Tuesday, March 31, 2026

Ansible_1

 Playbook

In Ansible, a script is called a playbook. 

A playbook describes which hosts to configure, and an ordered list of tasks to perform on those hosts. 

The playbook can be executed by using the ansible-playbook command:

Ansible’s playbook syntax is built on top of YAML


$ ansible-playbook <name>.yml


Ansible will make SSH connections in parallel

It will execute the first task on the list on all hosts simultaneously.


eg:- 1st task

- name: Install nginx

  apt: name=nginx
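In context, a task like the one above lives under a play's tasks list. A minimal sketch of a complete playbook (the webservers group name and the become setting are illustrative assumptions, not from these notes):

```yaml
---
- name: Configure web servers
  hosts: webservers          # inventory group (assumed name)
  become: true               # escalate privileges for package installs
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Running ansible-playbook against an inventory containing the webservers group would execute these tasks, in order, across all matching hosts.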


Ansible will do the following:

1. Generate a Python script that installs the required package

2. Copy the script to host1, host2, and host3

3. Execute the script on host1, host2, and host3

4. Wait for the script to complete execution on all hosts




Ansible will then move to the next task in the list, and go through these same four steps. It’s important to note the following:

• Ansible runs each task in parallel across all hosts.

• Ansible waits until all hosts have completed a task before moving to the next task.

• Ansible runs the tasks in the order that you specify them.


Simple Terms

Think of Ansible playbooks as executable documentation. It's like the README file that describes the commands you had to type out to deploy your software, except that the instructions never go out of date, because they are also the code that gets executed directly.


Remote Hosts

To manage a server with Ansible, the server needs to have SSH and Python 2.5 or later installed, or Python 2.4 with the Python simplejson library installed. 

There’s no need to preinstall an agent or any other software on the host.


Control machine 

The control machine (the one that you use to control remote machines) needs to have Python 2.6 or later installed

Ansible

Name Ansible

It’s a science-fiction reference

An ansible is a fictional communication device that can transfer information faster than the speed of light. 

Ursula K. Le Guin invented the concept in her book Rocannon’s World, and other sci-fi authors have since borrowed the idea from Le Guin.


Michael DeHaan took the name Ansible from the book Ender’s Game by Orson Scott Card. In that book, the ansible was used to control many remote ships at once, over vast distances. Think of it as a metaphor for controlling remote servers.

Michael DeHaan :- Creator of Ansible software


Ansible started as a simple side project in February of 2012


Configuration Management

We are typically talking about writing some kind of state description for our servers, and then using a tool to enforce that the servers are, indeed, in that state: the right packages are installed, configuration files contain the expected values and have the expected permissions, the right services are running, and so on.


Deployment

This usually refers to the process of taking software that was written in-house, generating binaries or static assets (if necessary), copying the required files to the server(s), and then starting up the services.



Ansible is a great tool for deployment as well as configuration management. 

Monday, February 2, 2026

Linux Booting Issues

 Boot Issues


grub> set root=(hd0,msdos1)

grub> linux /boot/vmlinuz-<version> root=/dev/sda1    (optionally append ro or rw)

grub> initrd /boot/initrd.img-<version>

grub> boot



$ ls -l /

vmlinuz -> boot/vmlinuz-3.13.0-29-generic

initrd.img -> boot/initrd.img-3.13.0-29-generic

So you could boot from grub> like this:


grub> set root=(hd0,1)

grub> linux /vmlinuz root=/dev/sda1

grub> initrd /initrd.img

grub> boot

Sunday, September 7, 2025

iTerm

 When you open iTerm 



zsh compinit: insecure directories, run compaudit for list.

Ignore insecure directories and continue [y] or abort compinit [n]? n
compinit: initialization aborted




alagarasanramadoss@mbp ~ % compaudit

There are insecure directories:

/usr/local/share/zsh/site-functions

/usr/local/share/zsh

alagarasanramadoss@mbp ~ %


alagarasanramadoss@mbp ~ % sudo chown -R root:wheel /usr/local/share/zsh /usr/local/share/zsh/site-functions

alagarasanramadoss@mbp ~ % sudo chmod -R 755 /usr/local/share/zsh /usr/local/share/zsh/site-functions


Password:

alagarasanramadoss@mbp ~ % autoload -Uz compinit && compinit

Sunday, August 24, 2025

Container Runtime Interface

Install CRI-O (Container Runtime) 

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes.


runc:

runc is a low-level container runtime that directly interacts with the Linux kernel to create and run containers.

runc provides the basic functionality for creating and running containers, while containerd provides a more complete environment for managing and orchestrating container workloads.

ctr is an unsupported debug and administrative client for interacting with the containerd daemon.

Because it is unsupported, its commands, options, and operations are not guaranteed to be backward compatible or stable from release to release of the containerd project.


Stack: Linux kernel -> runc (low-level runtime) -> containerd / CRI-O (CRI implementations) -> kubelet


https://github.com/cri-o/cri-o/releases

https://github.com/opencontainers/runtime-tools

https://github.com/kubernetes-sigs/cri-tools/releases


# dnf install container-selinux cri-o cri-tools

# systemctl enable --now crio


Socket File

/var/run/crio/crio.sock 


Install Containerd

Containerd is an open-source, CRI (Container Runtime Interface)-compatible container runtime. It was created by Docker and donated to the CNCF.


# dnf install -y yum-utils

# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# dnf install -y containerd.io


Generate a default configuration file for containerd, and then modify it as needed; in particular, set SystemdCgroup to true.


# mkdir -p /etc/containerd

# containerd config default > /etc/containerd/config.toml

Edit /etc/containerd/config.toml if necessary, for example:

# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# systemctl enable --now containerd

Socket File
/var/run/containerd/containerd.sock

Install cri-tools

# curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-${ARCH}.tar.gz --output crictl-${VERSION}-linux-${ARCH}.tar.gz

Extract the downloaded archive and move the crictl binary to a directory within your system's PATH, such as /usr/local/bin/

# tar zxvf crictl-${VERSION}-linux-${ARCH}.tar.gz -C /usr/local/bin

# crictl --version

# crictl info

# crictl info | grep -i containerd
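crictl finds the runtime through its endpoint settings; if crictl info cannot connect, one common fix is to set the endpoints explicitly in /etc/crictl.yaml. A sketch assuming containerd (for CRI-O, use unix:///var/run/crio/crio.sock instead):

```yaml
# /etc/crictl.yaml — point crictl at the containerd socket
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
```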







Pull Image

# crictl pull  hello-world:latest

# crictl pull  alpine:latest

List container images 

# crictl images

# crictl images nginx

# crictl images -q # List Image Ids



List all containers:

# crictl ps -a

# crictl ps # List Running Containers


Execute a command in a running container

# crictl exec -i -t <Container-ID> ls

Get Container Logs

# crictl logs <Container-ID>

# crictl logs --tail=1 <Container-ID>


# crictl stop <Container-ID>

# crictl stats <Container-ID>

# crictl inspect <Container-ID>

# crictl rm <Container-ID>


List pod resource usage statistics

# crictl statsp <Pod-ID>


To set container registries and set priority, edit the file:

# vi /etc/containers/registries.conf

eg 

unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "registry.centos.org", "docker.io"]



List pods

# crictl pods

# crictl pods --name <name>


# ctr events


Port Forward

The crictl port-forward command forwards local ports to ports inside a pod running in a Kubernetes CRI (Container Runtime Interface) environment.

# crictl port-forward <Pod-ID> [local-port]:<container-port>


Forward local port 8080 to container port 80

# crictl port-forward <Pod-ID> 8080:80 &    # the trailing & runs it in the background


# Let crictl choose an available local port

# crictl port-forward <Pod-ID> :80


Forward Multiple Ports

Forward multiple ports simultaneously

# crictl port-forward <Pod-ID> 8080:80 8443:443

Pull directly from a registry 

# crictl pull docker.io/nginx:latest


Import an image tarball

Note: crictl itself has no import subcommand, so tarballs are imported with the runtime's own client (ctr for containerd, shown below); the image then becomes visible to crictl images.


If using containerd directly

# ctr image import image.tar

Import with specific namespace

# ctr -n k8s.io image import image.tar



Vanilla Kubernetes Cluster

Set Proper Hostnames

# vi /etc/hosts

<IP>    master

<IP>    worker

 

Disable swap space

# swapoff -a

Update /etc/fstab: comment out the swap entry so swap stays disabled after reboot
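The fstab edit can be scripted with sed; a sketch against a throwaway copy (the UUID below is a made-up example), since on a real node you would target /etc/fstab itself:

```shell
# Hypothetical fstab with a swap entry (stand-in for /etc/fstab).
printf '%s\n' 'UUID=1111-2222 swap swap defaults 0 0' > /tmp/fstab.demo
# Comment out every line whose mount type is swap so it is not
# re-activated at the next boot.
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```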

 

Disable SELinux

# vi /etc/selinux/config

SELINUX=disabled
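The SELinux change can also be scripted; a sketch against a copy of the file (on a real node, edit /etc/selinux/config and reboot, or run setenforce 0 for the current session):

```shell
# Stand-in for /etc/selinux/config.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.demo
# Flip enforcing -> disabled; this takes effect after the next reboot.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /tmp/selinux.demo
grep '^SELINUX=' /tmp/selinux.demo
```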

 

 

Installing traffic control (tc) for container networking

# dnf install -y iproute-tc

 

FW Rules between master and worker nodes

Master Node

# firewall-cmd --permanent --add-port=6443/tcp

# firewall-cmd --permanent --add-port=2379-2380/tcp

# firewall-cmd --permanent --add-port=10250/tcp

# firewall-cmd --permanent --add-port=10251/tcp

# firewall-cmd --permanent --add-port=10252/tcp

# firewall-cmd --reload

 

Worker Nodes

# firewall-cmd --permanent --add-port=10250/tcp

# firewall-cmd --permanent --add-port=30000-32767/tcp                                            

# firewall-cmd --reload

 

 

Enable kernel modules

overlay

br_netfilter

Create a modules-load configuration file

# vi /etc/modules-load.d/<k8s>.conf

overlay

br_netfilter
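The file can also be written non-interactively with a heredoc; a sketch writing to /tmp so it can be run safely (the real path is /etc/modules-load.d/k8s.conf):

```shell
# Write the module list in one shot (real target: /etc/modules-load.d/k8s.conf).
cat <<'EOF' > /tmp/k8s-modules.demo
overlay
br_netfilter
EOF
cat /tmp/k8s-modules.demo
```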

 

 

# modprobe overlay

# modprobe br_netfilter


Checking kernel module status

lsmod | grep overlay

lsmod | grep br_netfilter

 

sysctl parameters

# vi /etc/sysctl.d/<k8s>.conf

net.bridge.bridge-nf-call-iptables  = 1

net.ipv4.ip_forward                 = 1

net.bridge.bridge-nf-call-ip6tables = 1

 

# sysctl --system

 

CRI (Container Runtime Interface)

Install container runtime

A Container Runtime is an application that supports running containers. 

    Containerd

    CRI-O

    Docker Engine

 

Installing

For the CRI-O runtime ref:- https://cri-o.io/

For containerd ref:- https://containerd.io/docs/getting-started/

 

 

# systemctl enable --now <crio|containerd>    # enables and starts the service

 

Adding yum repos

Add the Kubernetes yum repository. If you want to use a Kubernetes version different from v1.33, replace v1.33 with the desired minor version in the baseurl and gpgkey lines below.

 

 

# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/

enabled=1

gpgcheck=1

gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key

EOF

 

# dnf install kubelet kubeadm kubectl  --disableexcludes=kubernetes -y

# systemctl enable --now kubelet    # enables and starts the service

 

Prerequisite Done for K8S cluster 

 

Initialise a Kubernetes cluster using the kubeadm command

# kubeadm init --pod-network-cidr=192.168.10.0/16    # the end of the output contains the kubeadm join command to run on the worker nodes

This initialises a control plane on the master node. Once the control plane is created, configure kubectl access to start using the cluster:

# mkdir -p $HOME/.kube

# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

 

CNI (Container Network Interface)

CNI defines a set of APIs and a standard for plug-ins that provide and manage network connectivity for pods; you must use a CNI plug-in that is compatible with your cluster.

When Kubernetes needs to create a new pod, it calls a CNI plug-in to handle the networking part.

CNI allows different network solutions (CNI plug-ins) to be used with Kubernetes without changing the core Kubernetes code.

The plug-in is responsible for:

 

    Creating a network interface for the pod.

    Assigning an IP address to the pod from a specific IP address range (CIDR).

    Connecting the pod's network interface to the host's network stack.

    Managing network routes and rules so the pod can communicate with other pods, services, and external networks.

 

Plugins

Calico     
A robust, performant plugin that uses BGP (a standard internet routing protocol) for networking and offers powerful network policies


Flannel    
A very simple and easy-to-configure overlay network plugin. It's a great choice for getting started.


AWS VPC CNI
A specific plugin for Amazon EKS. It assigns Pods real IPs from the AWS VPC, deeply integrating with AWS networking.

 

Calico # https://docs.tigera.io/calico/latest/getting-started/

Flannel # https://github.com/flannel-io/flannel

 

# kubectl get pods -n kube-system

# kubectl get nodes

# kubectl get nodes -o wide

# kubectl get pods --all-namespaces

# kubectl cluster-info

# kubectl cluster-info dump

 

Adding a worker node to the cluster

# kubeadm join <args>    # the command printed at the end of kubeadm init

After a successful join:

# kubectl get nodes # We will be able to see the worker nodes