Sunday, September 7, 2025

iTerm

When you open iTerm, zsh may print the following warning:



zsh compinit: insecure directories, run compaudit for list.

Ignore insecure directories and continue [y] or abort compinit [n]? n

compinit: initialization aborted




alagarasanramadoss@mbp ~ % compaudit

There are insecure directories:

/usr/local/share/zsh/site-functions

/usr/local/share/zsh

alagarasanramadoss@mbp ~ %


Fix the ownership and permissions on the flagged directories:

alagarasanramadoss@mbp ~ % sudo chown -R root:wheel /usr/local/share/zsh /usr/local/share/zsh/site-functions

alagarasanramadoss@mbp ~ % sudo chmod -R 755 /usr/local/share/zsh /usr/local/share/zsh/site-functions


Password:

alagarasanramadoss@mbp ~ % autoload -Uz compinit && compinit
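
After re-running compinit, compaudit should return nothing. A quick way to confirm the permission fix stuck (checking the same two directories flagged above):

alagarasanramadoss@mbp ~ % compaudit

alagarasanramadoss@mbp ~ % ls -ld /usr/local/share/zsh /usr/local/share/zsh/site-functions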

Sunday, August 24, 2025

Container Runtime Interface

Install CRI-O (Container Runtime) 

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes.


runc:

runc is a low-level container runtime that directly interacts with the Linux kernel to create and run containers.

runc provides the basic functionality for creating and running containers, while containerd provides a more complete environment for managing and orchestrating container workloads.

ctr is an unsupported debug and administrative client for interacting with the containerd daemon.

Because it is unsupported, its commands, options, and operations are not guaranteed to be backward compatible or stable from release to release of the containerd project.


Linux kernel -> runc -> containerd / CRI-O (CRI implementation) -> Kubernetes


https://github.com/cri-o/cri-o/releases

https://github.com/opencontainers/runtime-tools

https://github.com/kubernetes-sigs/cri-tools/releases


# dnf install container-selinux cri-o cri-tools

# systemctl enable --now crio


Socket File

/var/run/crio/crio.sock 
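
A quick sanity check that the service is up and the socket exists (optional):

# systemctl status crio

# ls -l /var/run/crio/crio.sock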


Install Containerd

Containerd is an open-source, CRI (Container Runtime Interface) compatible container runtime. It was created by Docker and donated to the CNCF.


# dnf install -y yum-utils

# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# dnf install -y containerd.io


Generate a default configuration file for containerd, then modify it as needed; for example, set SystemdCgroup to true so containerd uses the systemd cgroup driver.


# mkdir -p /etc/containerd

# containerd config default > /etc/containerd/config.toml

Edit /etc/containerd/config.toml if necessary, for example:

# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# systemctl enable --now containerd
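
To confirm the setting took effect and the service is active (a quick check, not strictly required):

# grep SystemdCgroup /etc/containerd/config.toml

# systemctl is-active containerd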

Socket File
/var/run/containerd/containerd.sock

Install cri-tools
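
The download command below uses VERSION and ARCH shell variables, so set them first. The values here are only an example; pick the cri-tools release matching your Kubernetes minor version and your CPU architecture:

# VERSION="v1.33.0"

# ARCH="amd64"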

# curl -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-${VERSION}-linux-${ARCH}.tar.gz --output crictl-${VERSION}-linux-${ARCH}.tar.gz

Extract the downloaded archive and move the crictl binary to a directory within your system's PATH, such as /usr/local/bin/

# tar zxvf crictl-${VERSION}-linux-${ARCH}.tar.gz -C /usr/local/bin

# crictl --version

# crictl info

# crictl info | grep -i containerd
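
If crictl does not find the runtime endpoint on its own, point it at the socket explicitly. A minimal /etc/crictl.yaml, assuming containerd (use the crio.sock path above for CRI-O):

runtime-endpoint: unix:///var/run/containerd/containerd.sock

image-endpoint: unix:///var/run/containerd/containerd.sock

timeout: 10

debug: false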







Pull Image

# crictl pull  hello-world:latest

# crictl pull  alpine:latest

List container images 

# crictl images

# crictl images nginx

# crictl images -q # List Image Ids



List all containers:

# crictl ps -a

# crictl ps # List Running Containers


Execute a command in a running container

# crictl exec -i -t <Container-ID> ls

Get Container Logs

# crictl logs <Container-ID>

# crictl logs --tail=1 <Container-ID>


# crictl stop <Container-ID>

# crictl stats <Container-ID>

# crictl inspect <Container-ID>

# crictl rm <Container-ID>


List pod resource usage statistics

# crictl statsp <Pod-ID>


To configure container registries and their search priority (read by CRI-O and other tools that use the containers registries configuration), edit the file:

# vi /etc/containers/registries.conf

e.g.

unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "registry.centos.org", "docker.io"]



List pods

crictl pods

crictl pods --name <name>


# ctr events


Port Forward

The crictl port-forward command forwards local ports to a pod sandbox running in a Kubernetes CRI (Container Runtime Interface) environment:

# crictl port-forward <pod-id> [local-port]:<container-port>


Forward local port 8080 to container port 80

# crictl port-forward <pod-id> 8080:80 &   # the trailing & runs the forward in the background


# Let crictl choose an available local port

# crictl port-forward <pod-id> :80


Forward Multiple Ports

Forward multiple ports simultaneously

# crictl port-forward <pod-id> 8080:80 8443:443
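
Once a forward is running, it can be exercised from the host; this assumes something inside the pod is listening on port 80:

# curl http://127.0.0.1:8080/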

Pull directly from a registry 

# crictl pull docker.io/nginx:latest


Import a tarball image

The CRI image API has no import call, so crictl cannot load an image tarball directly; use the runtime's own client instead (ctr examples below). To download a tarball first and then import it with containerd:

# curl -o image.tar http://example.com/image.tar


If using containerd directly

# ctr image import image.tar

Import with specific namespace

# ctr -n k8s.io image import image.tar



Vanilla Kubernetes Cluster

Set Proper Hostnames

# vi /etc/hosts

<IP>    master

<IP>    worker
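
Set the matching hostname on each node (master and worker are the names used in /etc/hosts above):

# hostnamectl set-hostname master   # on the master node

# hostnamectl set-hostname worker   # on the worker node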

 

Disable swap space

# swapoff -a

Make it persistent by commenting out the swap entry in /etc/fstab
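
One way to do that non-interactively, assuming the swap entry is an uncommented fstab line with a whitespace-delimited swap field:

# sed -i '/\sswap\s/ s/^/#/' /etc/fstab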

 

Disable SELinux

# vi /etc/selinux/config

SELINUX=disabled
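
The config change takes effect on the next boot; to switch to permissive mode immediately and verify:

# setenforce 0

# getenforce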

 

 

Install the traffic control (tc) utility, which kubeadm's preflight checks look for

# dnf install -y iproute-tc

 

Firewall rules between master and worker nodes

Master Node

# firewall-cmd --permanent --add-port=6443/tcp

# firewall-cmd --permanent --add-port=2379-2380/tcp

# firewall-cmd --permanent --add-port=10250/tcp

# firewall-cmd --permanent --add-port=10251/tcp

# firewall-cmd --permanent --add-port=10252/tcp

# firewall-cmd --reload

 

Worker Nodes

# firewall-cmd --permanent --add-port=10250/tcp

# firewall-cmd --permanent --add-port=30000-32767/tcp                                            

# firewall-cmd --reload

 

 

Enable kernel modules

overlay

br_netfilter

Create a modules-load configuration file

# vi /etc/modules-load.d/k8s.conf

overlay

br_netfilter

 

 

# modprobe overlay

# modprobe br_netfilter


Checking kernel module status

lsmod | grep overlay

lsmod | grep br_netfilter

 

sysctl parameters

# vi /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables  = 1

net.ipv4.ip_forward                 = 1

net.bridge.bridge-nf-call-ip6tables = 1

 

# sysctl --system
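
Verify the values are applied:

# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables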

 

CRI (Container Runtime Interface)

Install container runtime

A Container Runtime is an application that supports running containers. 

    Containerd

    CRI-O

    Docker Engine

 

Installing

For the CRI-O runtime, see: https://cri-o.io/

For containerd, see: https://containerd.io/docs/getting-started/

 

 

# systemctl enable --now <crio|containerd>

# systemctl start <crio|containerd>

 

Adding yum repos

Add the Kubernetes yum repository. To use a Kubernetes version other than v1.33, replace v1.33 with the desired minor version in the repo definition below.

 

 

# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/

enabled=1

gpgcheck=1

gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key

EOF

 

# dnf install kubelet kubeadm kubectl  --disableexcludes=kubernetes -y

# systemctl enable --now kubelet

# systemctl start kubelet
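
A quick check that the tooling is installed at the expected version:

# kubeadm version

# kubectl version --client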

 

Prerequisites done for the K8s cluster

 

Initialise a Kubernetes cluster using the kubeadm command

# kubeadm init --pod-network-cidr=192.168.0.0/16   # at the end of the output, kubeadm prints the kubeadm join command to run on the worker nodes

This initialises the control plane on the master node. Once the control plane is up, configure kubectl access for the current user:

# mkdir -p $HOME/.kube

# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# sudo chown $(id -u):$(id -g) $HOME/.kube/config
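
Alternatively, when running as root, kubectl can simply be pointed at the admin kubeconfig (the standard kubeadm alternative):

# export KUBECONFIG=/etc/kubernetes/admin.conf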

 

 

CNI (Container Network Interface)

You must install a CNI plugin that is compatible with your cluster; nodes remain NotReady until pod networking is in place. When Kubernetes needs to create a new pod, it calls a CNI plug-in to handle the networking part.

It defines a set of APIs and a standard for plug-ins that provide and manage network connectivity for pods. 

CNI allows different network solutions (CNI plug-ins) to be used with Kubernetes without changing the core Kubernetes code.

The plug-in is responsible for:

 

    Creating a network interface for the pod.

    Assigning an IP address to the pod from a specific IP address range (CIDR).

    Connecting the pod's network interface to the host's network stack.

    Managing network routes and rules so the pod can communicate with other pods, services, and external networks.

 

Plugins

Calico
A robust, performant plugin that uses BGP (a standard internet routing protocol) for networking and offers powerful network policies.


Flannel
A very simple and easy-to-configure overlay network plugin. It's a great choice for getting started.


AWS VPC CNI
A plugin specific to Amazon EKS. It assigns pods real IPs from the AWS VPC, integrating deeply with AWS networking.

 

Calico # https://docs.tigera.io/calico/latest/getting-started/

Flannel # https://github.com/flannel-io/flannel
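
As an example, Flannel can be applied from its release manifest; the URL below is the one given in the Flannel README, so verify it against the project docs before applying:

# kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml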

 

# kubectl get pods -n kube-system

# kubectl get nodes

# kubectl get nodes -o wide

# kubectl get pods --all-namespaces

# kubectl cluster-info

# kubectl cluster-info dump

 

Adding a worker node to the cluster

# kubeadm join <args>   # the command printed at the end of the kubeadm init output
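
If the join command was not saved or the token has expired, a fresh one can be printed on the control plane node:

# kubeadm token create --print-join-command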

After a successful join:

# kubectl get nodes   # the worker nodes now appear in the list