CentOS Stream 9

Kubernetes : Remove Nodes
2023/10/20

 

Remove Nodes from an existing Kubernetes Cluster.

[1] For example, remove the [node03.srv.world] node.
[root@mgr ~]#
kubectl get nodes

NAME               STATUS   ROLES           AGE   VERSION
dlp-1.srv.world    Ready    control-plane   17h   v1.28.2
dlp.srv.world      Ready    control-plane   21h   v1.28.2
node01.srv.world   Ready    <none>          20h   v1.28.2
node02.srv.world   Ready    <none>          20h   v1.28.2
node03.srv.world   Ready    <none>          17h   v1.28.2
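
Before draining, it can be useful to see which Pods are currently running on the target node, since those are what will be evicted. A field selector query like the one below lists them (output varies by cluster):

[root@mgr ~]#
kubectl get pods --all-namespaces --field-selector spec.nodeName=node03.srv.world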

# prepare to remove the target node
# --ignore-daemonsets ⇒ ignore Pods managed by a DaemonSet
# --delete-emptydir-data ⇒ also evict Pods that have emptyDir volumes (their local data is deleted)
# --force ⇒ also delete standalone Pods that are not managed by a controller (Deployment, ReplicaSet, and so on)

[root@mgr ~]#
kubectl drain node03.srv.world --ignore-daemonsets --delete-emptydir-data --force

node/node03.srv.world cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-fqf2f, kube-system/kube-proxy-rlcbx
node/node03.srv.world drained

# verify the node status a few minutes later

[root@mgr ~]#
kubectl get nodes node03.srv.world

NAME               STATUS                     ROLES    AGE   VERSION
node03.srv.world   Ready,SchedulingDisabled   <none>   17h   v1.28.2
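
At this point the node is cordoned but still part of the cluster, so the operation can still be reverted. If you decide to keep the node after all, uncordon it to make it schedulable again:

[root@mgr ~]#
kubectl uncordon node03.srv.world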

# delete the node from the cluster

[root@mgr ~]#
kubectl delete node node03.srv.world

node "node03.srv.world" deleted

[root@mgr ~]#
kubectl get nodes

NAME               STATUS   ROLES           AGE   VERSION
dlp-1.srv.world    Ready    control-plane   17h   v1.28.2
dlp.srv.world      Ready    control-plane   21h   v1.28.2
node01.srv.world   Ready    <none>          20h   v1.28.2
node02.srv.world   Ready    <none>          20h   v1.28.2
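
To confirm that the evicted workloads have been rescheduled on the remaining nodes, a wide pod listing shows which node each Pod now runs on:

[root@mgr ~]#
kubectl get pods --all-namespaces -o wide
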
[2] On the removed node, reset the kubeadm settings.
[root@node03 ~]#
kubeadm reset

W1020 09:46:46.476248   14397 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1020 09:46:47.739659   14397 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
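
As the messages above note, [kubeadm reset] leaves some cleanup to you. A possible sequence for a node that will no longer run Kubernetes is shown below; adjust it to your environment (the iptables flush removes all rules on the host, and [ipvsadm --clear] is only relevant if kube-proxy ran in IPVS mode):

# remove CNI configuration left behind by the network plugin
[root@node03 ~]#
rm -rf /etc/cni/net.d

# flush iptables rules created by kube-proxy and the CNI plugin
[root@node03 ~]#
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# clear IPVS tables (only if IPVS mode was used)
[root@node03 ~]#
ipvsadm --clear

# remove the local kubeconfig if one was copied to this node
[root@node03 ~]#
rm -rf $HOME/.kube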