Debian 12 bookworm

Kubernetes : Remove Nodes 2023/07/28

 

Remove Nodes from an existing Kubernetes Cluster.

This example is based on the environment shown below and removes the Node [snode03.srv.world] from it.

-----------+---------------------------+--------------------------+--------------+
           |                           |                          |              |
       eth0|10.0.0.25              eth0|10.0.0.71             eth0|10.0.0.72     |
+----------+-----------+   +-----------+-----------+   +-----------+-----------+ |
|  [ ctrl.srv.world ]  |   |  [snode01.srv.world]  |   |  [snode02.srv.world]  | |
|     Control Plane    |   |      Worker Node      |   |      Worker Node      | |
+----------------------+   +-----------------------+   +-----------------------+ |
                                                                                 |
------------+--------------------------------------------------------------------+
            |
        eth0|10.0.0.73
+-----------+-----------+
|  [snode03.srv.world]  |
|      Worker Node      |
+-----------------------+

[1] Remove the target Node from the cluster. Work on the Control Plane Node.
# drain the target node to prepare it for removal
# --ignore-daemonsets ⇒ ignore DaemonSet-managed pods
# --delete-emptydir-data ⇒ continue even if there are pods that use emptyDir volumes (their local data is deleted)
# --force ⇒ continue even if there are standalone pods that are not managed by a Deployment or another controller

root@ctrl:~#
kubectl drain snode03.srv.world --ignore-daemonsets --delete-emptydir-data --force

Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-b9xkk, kube-system/kube-proxy-86p6p
node/snode03.srv.world drained
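
# optional : confirm that only DaemonSet-managed Pods (here calico-node and kube-proxy) are left on the target Node
# one possible check using the standard [spec.nodeName] field selector; adjust the node name to your environment

root@ctrl:~#
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=snode03.srv.world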

# verify a few minutes later

root@ctrl:~#
kubectl get nodes snode03.srv.world

NAME                STATUS                     ROLES    AGE    VERSION
snode03.srv.world   Ready,SchedulingDisabled   <none>   6m4s   v1.26.6
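
# optional : if the Node should remain in the cluster after all, the drain can be undone like follows
# uncordon simply marks the Node as schedulable again

root@ctrl:~#
kubectl uncordon snode03.srv.world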

# delete the node object from the cluster

root@ctrl:~#
kubectl delete node snode03.srv.world

node "snode03.srv.world" deleted

root@ctrl:~#
kubectl get nodes

NAME                STATUS   ROLES           AGE     VERSION
ctrl.srv.world      Ready    control-plane   5h49m   v1.26.6
snode01.srv.world   Ready    <none>          56m     v1.26.6
snode02.srv.world   Ready    <none>          56m     v1.26.6
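
# optional : one way to confirm that the evicted workloads are now running on the remaining Worker Nodes
# the NODE column in the wide output shows where each Pod was rescheduled

root@ctrl:~#
kubectl get pods --all-namespaces -o wide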
[2] On the removed Node, reset the kubeadm settings.
root@snode03:~#
kubeadm reset

[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0728 00:55:59.745953    5201 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
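
# as the messages above note, [kubeadm reset] leaves CNI configuration, iptables/IPVS rules and kubeconfig files in place
# the commands below are one possible manual cleanup for a default setup; adjust them to your environment

# remove CNI configuration
root@snode03:~#
rm -rf /etc/cni/net.d

# flush iptables rules created for the cluster
root@snode03:~#
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# clear IPVS tables (only needed if the cluster used IPVS mode)
root@snode03:~#
ipvsadm --clear

# remove the local kubeconfig file if it exists on this node
root@snode03:~#
rm -f $HOME/.kube/config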