Set Proper Hostnames
# vi /etc/hosts
<IP> master
<IP> worker
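The /etc/hosts entries above only map names to IPs; set the hostname itself on each node with hostnamectl (the names master and worker are examples matching the hosts file):
# hostnamectl set-hostname master   # on the master node
# hostnamectl set-hostname worker   # on the worker node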
Disable swap space
# swapoff -a
Comment out the swap entry in /etc/fstab so swap stays disabled after a reboot.
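One way to comment it out, a quick sketch assuming a whitespace-separated swap line in fstab:
# sed -i '/\sswap\s/ s/^/#/' /etc/fstab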
Disable SELinux
# vi /etc/selinux/config
SELINUX=disabled
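SELINUX=disabled only takes effect after a reboot. To switch the running system to permissive mode immediately and confirm:
# setenforce 0
# getenforce   # should now report Permissive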
Install traffic control (tc), which kubeadm's preflight checks expect on each node
# dnf install -y iproute-tc
Firewall rules between master and worker nodes
Master Node
# firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
# firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
# firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
# firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
# firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
# firewall-cmd --reload
Worker Nodes
# firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
# firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort Services
# firewall-cmd --reload
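To confirm the open ports on each node:
# firewall-cmd --list-ports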
Enable kernel modules
overlay
br_netfilter
Create a modules configuration file so they load on boot
# vi /etc/modules-load.d/k8s.conf
overlay
br_netfilter
# modprobe overlay
# modprobe br_netfilter
Check kernel module status
# lsmod | grep overlay
# lsmod | grep br_netfilter
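If the modules are loaded, each command prints a matching line (sizes and use counts will vary):
overlay               190464  0
br_netfilter           32768  0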
Set sysctl parameters required for Kubernetes networking
# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
# sysctl --system
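Verify that the values took effect:
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward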
CRI (Container Runtime Interface)
Install a container runtime
A container runtime is the application that runs containers on each node. Common options:
Containerd
CRI-O
Docker Engine
Installing
For the CRI-O runtime, see: https://cri-o.io/
For containerd, see: https://containerd.io/docs/getting-started/
After installing the runtime, enable and start its service:
# systemctl enable --now containerd   # or crio, depending on the runtime you chose
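As an example, a minimal containerd setup sketch; the containerd.io package name assumes the Docker CE repository is configured, and SystemdCgroup = true matches kubelet's default systemd cgroup driver:
# dnf install -y containerd.io
# mkdir -p /etc/containerd
# containerd config default > /etc/containerd/config.toml
# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# systemctl enable --now containerd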
Adding yum repos
Add the Kubernetes yum repository. If you want to use a Kubernetes version other than v1.33, replace v1.33 with the desired minor version in the command below.
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# dnf install kubelet kubeadm kubectl --disableexcludes=kubernetes -y
# systemctl enable --now kubelet   # kubelet restarts in a crash loop until kubeadm init runs; this is expected
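Confirm the tools installed correctly:
# kubeadm version
# kubectl version --client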
Prerequisites are done for the K8s cluster
Initialise a Kubernetes cluster using the kubeadm command
# kubeadm init --pod-network-cidr=192.168.0.0/16   # at the end of the output, kubeadm prints the join command to run on worker nodes
This initialises the control plane on the master node. Once the control plane is up, configure kubectl access to start using the cluster:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
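Alternatively, as root you can point kubectl straight at the admin kubeconfig:
# export KUBECONFIG=/etc/kubernetes/admin.conf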
CNI (Container Network Interface)
You must install a CNI plugin that is compatible with your cluster; nodes report NotReady until pod networking is in place. When Kubernetes needs to create a new pod, it calls the CNI plug-in to handle the networking part.
It defines a set of APIs and a standard for plug-ins that provide and manage network connectivity for pods.
CNI allows different network solutions (CNI plug-ins) to be used with Kubernetes without changing the core Kubernetes code.
The plug-in is responsible for:
Creating a network interface for the pod.
Assigning an IP address to the pod from a specific IP address range (CIDR).
Connecting the pod's network interface to the host's network stack.
Managing network routes and rules so the pod can communicate with other pods, services, and external networks.
Plugins
Calico: A robust, performant plugin that uses BGP (a standard internet routing protocol) for networking and offers powerful network policies.
Flannel: A very simple and easy-to-configure overlay network plugin. It's a great choice for getting started.
AWS VPC CNI: A specific plugin for Amazon EKS. It assigns pods real IPs from the AWS VPC, deeply integrating with AWS networking.
Calico # https://docs.tigera.io/calico/latest/getting-started/
Flannel # https://github.com/flannel-io/flannel
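Both install with a single manifest. As a sketch, the commands below use URL patterns from the projects' docs; the Calico version shown is an assumption, so check the links above for the current release:
# kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml   # Calico
# kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml        # Flannel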
Verify the cluster state
# kubectl get pods -n kube-system   # control-plane and CNI pods should be Running
# kubectl get nodes
# kubectl get nodes -o wide
# kubectl get pods --all-namespaces
# kubectl cluster-info
# kubectl cluster-info dump
Adding a worker node to the cluster
# kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>   # use the exact command printed by kubeadm init
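If the original output is lost or the token has expired (tokens are valid for 24 hours by default), regenerate the join command on the master:
# kubeadm token create --print-join-command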
After a successful join, verify from the master:
# kubectl get nodes   # the worker node now appears in the list
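The output will look similar to this (ages and patch versions will differ):
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   15m   v1.33.x
worker   Ready    <none>          2m    v1.33.x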