Ubuntu 24.04

Kubernetes : Remove Nodes 2024/06/07

 

Remove nodes from an existing Kubernetes cluster.

This example is based on the cluster environment shown below.
It removes [node03.srv.world (10.0.0.53)] from this cluster.

+----------------------+   +----------------------+
|  [ ctrl.srv.world ]  |   |   [ dlp.srv.world ]  |
|     Manager Node     |   |     Control Plane    |
+-----------+----------+   +-----------+----------+
        eth0|10.0.0.25             eth0|10.0.0.30
            |                          |
------------+--------------------------+--------------------------+-----------
            |                          |                          |
        eth0|10.0.0.51             eth0|10.0.0.52             eth0|10.0.0.53
+-----------+----------+   +-----------+----------+   +-----------+----------+
| [ node01.srv.world ] |   | [ node02.srv.world ] |   | [ node03.srv.world ] |
|     Worker Node#1    |   |     Worker Node#2    |   |     Worker Node#3    |
+----------------------+   +----------------------+   +----------------------+

[1] Work on the Manager Node.
root@ctrl:~#
kubectl get nodes

NAME               STATUS   ROLES           AGE     VERSION
dlp.srv.world      Ready    control-plane   3h40m   v1.30.1
node01.srv.world   Ready    <none>          3h35m   v1.30.1
node02.srv.world   Ready    <none>          3h34m   v1.30.1
node03.srv.world   Ready    <none>          7m44s   v1.30.1

# prepare the target node for removal
# --ignore-daemonsets ⇒ ignore DaemonSet-managed pods (they cannot be evicted)
# --delete-emptydir-data ⇒ continue even if pods use emptyDir volumes (their local data is deleted)
# --force ⇒ also delete bare pods that are not managed by a Deployment or other controller

root@ctrl:~#
kubectl drain node03.srv.world --ignore-daemonsets --delete-emptydir-data --force

node/node03.srv.world cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-gl4k8, kube-system/kube-proxy-8c2hq
node/node03.srv.world drained
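
# After the drain finishes, only DaemonSet-managed pods such as [calico-node] and
# [kube-proxy] should still be running on the node. As a quick check, the
# [--field-selector] flag filters pods by the node they are scheduled on:

root@ctrl:~#
kubectl get pods -A -o wide --field-selector spec.nodeName=node03.srv.world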

# verify it a few minutes later

root@ctrl:~#
kubectl get nodes node03.srv.world

NAME               STATUS                     ROLES    AGE    VERSION
node03.srv.world   Ready,SchedulingDisabled   <none>   9m6s   v1.30.1
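
# The [SchedulingDisabled] status only means the node is cordoned; it is still a
# cluster member. If you decide to keep the node after all, make it schedulable
# again instead of deleting it:

root@ctrl:~#
kubectl uncordon node03.srv.world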

# run the delete

root@ctrl:~#
kubectl delete node node03.srv.world

node "node03.srv.world" deleted

root@ctrl:~#
kubectl get nodes

NAME               STATUS   ROLES           AGE     VERSION
dlp.srv.world      Ready    control-plane   3h42m   v1.30.1
node01.srv.world   Ready    <none>          3h37m   v1.30.1
node02.srv.world   Ready    <none>          3h36m   v1.30.1
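
# The evicted workloads should now be running on the remaining workers; listing
# pods with [-o wide] shows which node each pod landed on:

root@ctrl:~#
kubectl get pods -A -o wide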
[2] On the removed node, reset the kubeadm settings.
root@node03:~#
kubeadm reset

W0607 03:32:56.950681    7883 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0607 03:32:57.968829    7883 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
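
# As the output above notes, [kubeadm reset] leaves the CNI configuration,
# iptables/IPVS rules, and kubeconfig files in place. A minimal manual cleanup on
# the removed node, following those hints (run [ipvsadm --clear] only if the
# cluster used IPVS, and remove [$HOME/.kube/config] only if it exists on this host):

root@node03:~#
rm -rf /etc/cni/net.d
root@node03:~#
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
root@node03:~#
rm -f $HOME/.kube/config

# After this cleanup, the host can join a cluster again later with a fresh [kubeadm join].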