Managing a Kubernetes cluster involves more than just deploying workloads; sometimes we need to scale down or perform maintenance by removing nodes. Recently, we had to remove a node from our MicroK8s-managed Kubernetes cluster. While it might seem straightforward, it's important to do this safely to avoid data loss or service disruption. Here's the step-by-step process we followed, from draining the node to fully removing it, ensuring our cluster remained stable and operational throughout.
Step-by-step guide
List nodes
First, we listed the nodes to identify the one we needed to remove:
kubectl get nodes
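If the cluster has several similarly named nodes, the wide output can help confirm the right target by showing each node's internal IP and kubelet version (an optional extra using the standard -o wide flag):
kubectl get nodes -o wide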
Drain the node
Before removal, we drained the node to cordon it and safely evict its workloads. The --ignore-daemonsets flag is needed because DaemonSet pods cannot be evicted, and --delete-emptydir-data acknowledges that data in emptyDir volumes will be lost (on older kubectl releases this flag was named --delete-local-data):
kubectl drain vm01 --ignore-daemonsets --delete-emptydir-data
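As a sanity check, listing the pods still scheduled on the node should show only DaemonSet-managed pods once the drain completes; the field selector below filters pods by node name:
kubectl get pods --all-namespaces --field-selector spec.nodeName=vm01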
Delete the node
After draining, we deleted the node from the Kubernetes API. This removes the Node object itself, while the MicroK8s services on the machine keep running until it leaves the cluster:
kubectl delete node vm01
Remove the node
Finally, on the node being removed (vm01 in our case), we told it to leave the cluster:
sudo microk8s leave
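Note that microk8s leave is run on the departing node itself. If a node is unreachable, or its entry lingers in the cluster afterwards, MicroK8s also offers a remove-node command that can be run from one of the remaining cluster members:
sudo microk8s remove-node vm01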
Verify the node removal
To make sure everything was clean, we checked the nodes again:
kubectl get nodes
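It's also worth confirming that the evicted workloads were rescheduled; listing pods across all namespaces with the wide output shows which node each pod landed on:
kubectl get pods --all-namespaces -o wide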
This process ensures that workloads are handled safely and that the node is fully removed without leaving any dangling resources behind.
Summary
By following these steps, we removed a node from our MicroK8s-managed Kubernetes cluster without disruption. Draining the node first, deleting it from the API, and then having it leave the cluster kept the remaining nodes stable and the workloads running. Handling node removal with this kind of care is key to smooth Kubernetes management, and if you're scaling down or performing maintenance on your own cluster, these steps should guide you through it.