Kubernetes : Remove Nodes
2024/06/07
To remove a node from an existing Kubernetes cluster, follow the steps below.
This example is based on a cluster environment with 5 nodes, as shown below.

+----------------------+   +----------------------+
| [ ctrl.srv.world ]   |   | [ dlp.srv.world ]    |
|     Manager Node     |   |    Control Plane     |
+-----------+----------+   +-----------+----------+
        eth0|10.0.0.25             eth0|10.0.0.30
            |                          |
------------+--------------------------+--------------------------+-----------
            |                          |                          |
        eth0|10.0.0.51             eth0|10.0.0.52             eth0|10.0.0.53
+-----------+----------+   +-----------+----------+   +-----------+----------+
| [ node01.srv.world ] |   | [ node02.srv.world ] |   | [ node03.srv.world ] |
|    Worker Node#1     |   |    Worker Node#2     |   |    Worker Node#3     |
+----------------------+   +----------------------+   +----------------------+
[1] Run the node removal on the Manager Node.
root@ctrl:~# kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
dlp.srv.world      Ready    control-plane   3h40m   v1.30.1
node01.srv.world   Ready    <none>          3h35m   v1.30.1
node02.srv.world   Ready    <none>          3h34m   v1.30.1
node03.srv.world   Ready    <none>          7m44s   v1.30.1

# preparation to remove the target node safely
# --ignore-daemonsets ⇒ ignore DaemonSet-managed Pods
# --delete-emptydir-data ⇒ continue even if Pods use emptyDir volumes (their local data is deleted)
# --force ⇒ also delete standalone Pods that are not managed by a controller
root@ctrl:~# kubectl drain node03.srv.world --ignore-daemonsets --delete-emptydir-data --force
node/node03.srv.world cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-gl4k8, kube-system/kube-proxy-8c2hq
node/node03.srv.world drained

# verify after a while
# the time it takes depends on your environment
root@ctrl:~# kubectl get nodes node03.srv.world
NAME               STATUS                     ROLES    AGE    VERSION
node03.srv.world   Ready,SchedulingDisabled   <none>   9m6s   v1.30.1

# run the deletion
root@ctrl:~# kubectl delete node node03.srv.world
node "node03.srv.world" deleted
root@ctrl:~# kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
dlp.srv.world      Ready    control-plane   3h42m   v1.30.1
node01.srv.world   Ready    <none>          3h37m   v1.30.1
node02.srv.world   Ready    <none>          3h36m   v1.30.1
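If you want an extra check before running [kubectl delete node], the command below (not part of the original steps, but a standard kubectl field selector) lists the Pods still bound to the drained node; only DaemonSet-managed Pods should remain.

# optional check : only DaemonSet Pods (calico-node, kube-proxy) should still be on the node
root@ctrl:~# kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node03.srv.world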
[2] On the removed node, reset the kubeadm configuration.
root@node03:~# kubeadm reset
W0607 03:32:56.950681 7883 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0607 03:32:57.968829 7883 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
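As the messages above show, [kubeadm reset] leaves the CNI configuration, iptables/IPVS rules, and kubeconfig files in place. The commands below are a minimal sketch of that manual cleanup, based only on the paths mentioned in the output above; skip them if the node will rejoin the cluster soon.

# remove the CNI configuration noted in the output above
root@node03:~# rm -rf /etc/cni/net.d

# flush iptables rules (if the cluster used IPVS, run [ipvsadm --clear] instead)
root@node03:~# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# remove the kubeconfig file if one exists on this node
root@node03:~# rm -f $HOME/.kube/config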