Building a highly available Kubernetes (K8s) cluster involves several key steps, from enabling high-availability mode to handling networking, storage, and security. Recently, we configured a robust HA cluster using MicroK8s, Kube-VIP, and NFS. This setup provides redundancy, load balancing, and persistent storage across multiple nodes. Below is a detailed account of how we set up this cluster, including kernel module configuration, certificate regeneration, NFS and IPVS setup, and more.
Preparing nodes
Preparing nodes for a highly available Kubernetes cluster mainly involves installing two packages: nfs-common for shared storage and ipvsadm for IPVS-based load balancing. Both are essential for the cluster's networking and storage configuration. After installation, the required IPVS kernel modules are set to load automatically by creating a configuration file, ensuring they are available every time the system boots.
Install nfs-common
This package is necessary for shared storage over the network, which enables distributed applications to access the same storage from multiple nodes.
sudo apt install -y nfs-common
Install ipvsadm
This tool is required for configuring load balancing using IPVS (IP Virtual Server), which is more efficient for handling traffic across Kubernetes nodes.
sudo apt install -y ipvsadm
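Beyond installation, ipvsadm is also handy for inspecting the IPVS virtual server table once the cluster is handling traffic. On a freshly prepared node the table will simply be empty:

```shell
# List the current IPVS virtual server table (-L) with numeric
# addresses and ports (-n). On a fresh node this prints only headers.
sudo ipvsadm -Ln
```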
Auto-load IPVS modules
To ensure IPVS modules load automatically upon boot, create a configuration file:
sudo tee /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
Load IPVS modules immediately
After creating the configuration file, you can load the modules without rebooting by running:
sudo systemctl restart systemd-modules-load.service
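To confirm the modules were actually loaded, check the kernel module list:

```shell
# Each module listed in /etc/modules-load.d/ipvs.conf should appear here.
lsmod | grep -E 'ip_vs|nf_conntrack'
```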
This setup prepares the nodes for smooth integration into a Kubernetes cluster with high availability, shared storage, and efficient load balancing.
Bootstrapping the first node
Bootstrapping the first node in a highly available Kubernetes cluster involves several critical steps to establish the foundational infrastructure. The process begins with installing MicroK8s and enabling essential add-ons such as dns, ha-cluster, and helm.
Afterward, the node's IP addresses and domain names are added to the CSR template to ensure proper network identification, and the certificates are regenerated to reflect these changes. Finally, Kube-VIP is configured to manage the virtual IP, preparing the node for high availability once additional nodes join the cluster.
Install MicroK8s
Begin by installing MicroK8s on the first node:
sudo snap install microk8s --classic --channel=1.31/stable
Install required add-ons
Enable the necessary add-ons such as dns, ha-cluster, and helm, along with any other services required for your setup:
sudo microk8s enable dns ha-cluster helm
Configure IPs and domains
Edit the CSR configuration file to add IP addresses and domain names:
sudo vim /var/snap/microk8s/current/certs/csr.conf.template
In the [ alt_names ] section, add your IP addresses and DNS names:
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.11 = k8s.slys.dev # My domain
DNS.12 = k8s.maas # My local domain
IP.1 = 127.0.0.1
IP.2 = 10.152.183.1
IP.11 = 192.168.111.10 # control plane IP
IP.12 = 192.168.111.11 # user plane IP
Regenerate certificates
After updating the configuration, regenerate the certificates:
sudo microk8s refresh-certs --cert server.crt
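It is worth verifying that the regenerated certificate actually contains the new entries. The server certificate lives alongside the CSR template in the snap's certs directory:

```shell
# Print the certificate's Subject Alternative Names and confirm the
# added DNS names and IPs are present.
sudo openssl x509 -in /var/snap/microk8s/current/certs/server.crt \
  -noout -text | grep -A2 'Subject Alternative Name'
```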
Set up Kube-VIP
To ensure high availability later, configure Kube-VIP for managing the virtual IP across nodes:
helm repo add kube-vip https://kube-vip.github.io/helm-charts
helm repo update
helm install kube-vip kube-vip/kube-vip \
--namespace kube-system \
--create-namespace \
-f -<<EOF
config:
  address: "192.168.111.10"
env:
  vip_interface: "enp5s0"    # interface on which the VIP is announced; same on all nodes
  vip_arp: "true"            # mandatory for L2 mode
  lb_enable: "true"
  lb_port: "16443"           # MicroK8s uses 16443 instead of 6443
  vip_cidr: "24"
  cp_enable: "true"          # enable control plane load balancing
  svc_enable: "true"         # enable user plane (Service) load balancing
  vip_leaderelection: "true" # mandatory for L2 mode
  svc_election: "true"
EOF
Check node status
Verify that the node is operating correctly:
sudo microk8s status
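It is also worth confirming that Kube-VIP is running and that the virtual IP is actually announced on the configured interface (the interface name and VIP here come from the Helm values above; adjust to your environment):

```shell
# The kube-vip pods should be in the Running state
sudo microk8s kubectl get pods -n kube-system | grep kube-vip

# The VIP should appear as an additional address on the configured interface
ip addr show enp5s0 | grep 192.168.111.10
```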
Once two more nodes join, HA mode will be automatically enabled.
Bootstrapping other nodes
Bootstrapping additional nodes in a highly available Kubernetes cluster involves several critical steps to ensure seamless integration with the first node. The process begins by installing MicroK8s on each new node, followed by configuring IP addresses and DNS entries to ensure proper network communication. After making these adjustments, certificates are regenerated to reflect the new settings. The first node then generates a join command using microk8s add-node, which is executed on the additional nodes to join the cluster and contribute to its high-availability configuration.
Install MicroK8s
Install MicroK8s on each additional node:
sudo snap install microk8s --classic --channel=1.31/stable
Configure IPs and domains
Edit the CSR configuration file to add IP addresses and DNS entries:
sudo vim /var/snap/microk8s/current/certs/csr.conf.template
Add IP and DNS values under the [ alt_names ] section:
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.11 = k8s.slys.dev # My domain
DNS.12 = k8s.maas # My local domain
IP.1 = 127.0.0.1
IP.2 = 10.152.183.1
IP.11 = 192.168.111.10 # control plane IP
IP.12 = 192.168.111.11 # user plane IP
Regenerate certificates
After modifying the CSR template, regenerate the certificates:
sudo microk8s refresh-certs --cert server.crt
Trigger the add-node command on the first node
On the first node, generate the join command by running:
sudo microk8s add-node
You should get the following output:
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.111.206:25000/d703229deeeb79e9c2bbbbf786f302c3/97d5abd11d20
Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.111.206:25000/d703229deeeb79e9c2bbbbf786f302c3/97d5abd11d20 --worker
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.111.206:25000/d703229deeeb79e9c2bbbbf786f302c3/97d5abd11d20
microk8s join 192.168.111.10:25000/d703229deeeb79e9c2bbbbf786f302c3/97d5abd11d20
Execute join command on other nodes
On each additional node, execute the generated join command from the first node to join the cluster:
Note that a new join command must be generated on the first node for every node you add.
sudo microk8s join 192.168.111.10:25000/d703229deeeb79e9c2bbbbf786f302c3/97d5abd11d20
Once completed, the nodes will join the cluster as active members, contributing to the cluster's high availability setup.
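Once at least three nodes are members, you can confirm the cluster is healthy and that HA mode kicked in:

```shell
# All joined nodes should be listed and Ready
sudo microk8s kubectl get nodes

# With three or more nodes this should report "high-availability: yes"
sudo microk8s status | grep -A3 high-availability
```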
Summary
In conclusion, successfully bootstrapping a highly available Kubernetes cluster is the foundation of a resilient and scalable infrastructure. By carefully configuring each node, ensuring proper network settings, and enabling high-availability features, you create a robust environment capable of tolerating failures and balancing workloads efficiently. This lays the groundwork for long-term stability, ensuring your Kubernetes deployment remains reliable as demands evolve. Full demo here.
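As a final smoke test, the API server should now be reachable through the virtual IP on MicroK8s' port 16443 (VIP and port taken from the Kube-VIP configuration above; -k skips certificate verification for this quick connectivity check):

```shell
# Query the Kubernetes API version endpoint through the VIP.
curl -k https://192.168.111.10:16443/version
```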