Kubernetes : Remove Nodes
2022/11/03
Remove Nodes from an existing Kubernetes Cluster.
This example is based on the environment shown below and removes the Node [snode03.srv.world] from it.
-----------+---------------------------+--------------------------+------------
           |                           |                          |
       eth0|10.0.0.25             eth0|10.0.0.71             eth0|10.0.0.72
+----------+-----------+  +-----------+-----------+  +-----------+-----------+
|  [ ctrl.srv.world ]  |  |  [snode01.srv.world]  |  |  [snode02.srv.world]  |
|     Control Plane    |  |      Worker Node      |  |      Worker Node      |
+----------------------+  +-----------------------+  +-----------------------+

------------+--------------------------------------------------------------------
            |
        eth0|10.0.0.73
+-----------+-----------+
|  [snode03.srv.world]  |
|      Worker Node      |
+-----------------------+
[1] Remove a Node. Work on the Master Node.
# prepare to remove the target node
# --ignore-daemonsets ⇒ ignore pods managed by a DaemonSet
# --delete-emptydir-data ⇒ also evict pods that have emptyDir volumes
# --force ⇒ also remove pods that were created directly as pods, not managed by a Deployment or other controller

root@ctrl:~# kubectl drain snode03.srv.world --ignore-daemonsets --delete-emptydir-data --force

Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-762x8, kube-system/kube-proxy-lm25n
node/snode03.srv.world drained

# verify a few minutes later
root@ctrl:~# kubectl get nodes snode03.srv.world

NAME                STATUS                     ROLES    AGE   VERSION
snode03.srv.world   Ready,SchedulingDisabled   <none>   14m   v1.25.3

# delete the node from the cluster
root@ctrl:~# kubectl delete node snode03.srv.world

node "snode03.srv.world" deleted

root@ctrl:~# kubectl get nodes

NAME                STATUS   ROLES           AGE   VERSION
ctrl.srv.world      Ready    control-plane   26h   v1.25.3
snode01.srv.world   Ready    <none>          25h   v1.25.3
snode02.srv.world   Ready    <none>          25h   v1.25.3
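Before running the delete, it can help to confirm that the workloads formerly on [snode03.srv.world] have actually been rescheduled onto the remaining Worker Nodes. The commands below are a minimal sketch using this example's node names; only DaemonSet-managed pods (calico-node, kube-proxy) should still be reported on the drained node.

# pods still bound to the drained node (only DaemonSet pods are expected here)
root@ctrl:~# kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=snode03.srv.world

# pods that have been rescheduled onto the remaining Worker Nodes
root@ctrl:~# kubectl get pods --all-namespaces -o wide | grep -E 'snode01|snode02'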
[2] On the removed Node, reset the kubeadm settings.
root@snode03:~# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
W1103 04:37:53.839439 7438 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
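As the messages above note, [kubeadm reset] leaves the CNI configuration, iptables / IPVS rules, and kubeconfig files in place. The commands below are one possible manual cleanup for this example (Calico CNI assumed, and no other local firewall rules assumed); flushing iptables clears every rule on the host, so skip that step if the node has rules you need to keep.

# remove the leftover CNI configuration
root@snode03:~# rm -rf /etc/cni/net.d

# flush iptables rules created by kube-proxy and the CNI plugin
root@snode03:~# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# if the cluster ran kube-proxy in IPVS mode, clear the IPVS tables too
root@snode03:~# ipvsadm --clear

# remove the leftover kubeconfig, if one exists on this node
root@snode03:~# rm -rf $HOME/.kube/config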