
Kubernetes : Remove Nodes (2018/04/15)

To remove a node from an existing Kubernetes cluster, configure as follows.
This example removes the [node03.srv.world] node from a Kubernetes cluster that consists of the 4 nodes shown below.
           |                           |                          |                          |
       eth0|                       eth0|                      eth0|                      eth0|
+----------+-----------+   +-----------+----------+   +-----------+----------+   +-----------+----------+
|   [ dlp.srv.world ]  |   | [ node01.srv.world ] |   | [ node02.srv.world ] |   | [ node03.srv.world ] |
|      Master Node     |   |      Worker Node     |   |      Worker Node     |   |      Worker Node     |
+----------------------+   +----------------------+   +----------------------+   +----------------------+

[1] On the Master node, remove the target node.
# preparation to remove the target node safely

# --ignore-daemonsets ⇒ ignore DaemonSet-managed Pods

# --delete-local-data ⇒ proceed even if there are Pods that use emptyDir volumes (their local data is deleted)

# --force ⇒ also delete standalone Pods that are not managed by a controller

[root@dlp ~]#
kubectl drain node03.srv.world --ignore-daemonsets --delete-local-data --force

node/node03.srv.world drained
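
If you are on a recent Kubernetes release, note that [--delete-local-data] has been deprecated and renamed to [--delete-emptydir-data] (the behavior is the same); on such releases the equivalent drain command would look like this:

# equivalent drain with the renamed option (recent releases)

[root@dlp ~]#
kubectl drain node03.srv.world --ignore-daemonsets --delete-emptydir-data --force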

# verify after a while

# the time it takes varies by environment

[root@dlp ~]#
kubectl get nodes node03.srv.world

NAME               STATUS                     ROLES    AGE     VERSION
node03.srv.world   Ready,SchedulingDisabled   <none>   7m15s   v1.21.1
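
Before running the actual deletion, it is also possible to check which Pods are still scheduled on the target node (after a successful drain, normally only DaemonSet-managed Pods remain). The check below is just one example of such a verification.

# list Pods still running on the target node

[root@dlp ~]#
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node03.srv.world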

# run the deletion

[root@dlp ~]#
kubectl delete node node03.srv.world

node "node03.srv.world" deleted

[root@dlp ~]#
kubectl get nodes

NAME               STATUS   ROLES                  AGE   VERSION
dlp.srv.world      Ready    control-plane,master   16h   v1.21.1
node01.srv.world   Ready    <none>                 16h   v1.21.1
node02.srv.world   Ready    <none>                 16h   v1.21.1
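
By the way, if you only need to take a node out of scheduling temporarily rather than removing it from the cluster, [kubectl cordon] and [kubectl uncordon] are sufficient, for example like this:

# mark the node unschedulable (running Pods are not evicted)

[root@dlp ~]#
kubectl cordon node03.srv.world

# make the node schedulable again

[root@dlp ~]#
kubectl uncordon node03.srv.world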
[2] On the removed node, reset the kubeadm settings.
[root@node03 ~]#
kubeadm reset

[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0525 08:56:14.574344   12039 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
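
As the messages above state, [kubeadm reset] does not clean up the CNI configuration, iptables/IPVS rules, or kubeconfig files. A possible manual cleanup on the removed node is shown below; adjust it to your own CNI plugin and firewall configuration.

# remove CNI configuration

[root@node03 ~]#
rm -rf /etc/cni/net.d

# flush iptables rules

[root@node03 ~]#
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# clear IPVS tables (only if the cluster used IPVS)

[root@node03 ~]#
ipvsadm --clear

# remove the leftover kubeconfig file

[root@node03 ~]#
rm -f $HOME/.kube/config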