Kubernetes : Add Control Plane Node
2025/01/24
This example adds a new Control Plane Node to an existing Kubernetes cluster.
It is based on the cluster environment shown below.
+----------------------+ +----------------------+
| [ ctrl.srv.world ] | | [ dlp.srv.world ] |
| Manager Node | | Control Plane |
+-----------+----------+ +-----------+----------+
eth0|10.0.0.25 eth0|10.0.0.30
| |
------------+--------------------------+-----------
| |
eth0|10.0.0.51 eth0|10.0.0.52
+-----------+----------+ +-----------+----------+
| [ node01.srv.world ] | | [ node02.srv.world ] |
| Worker Node#1 | | Worker Node#2 |
+----------------------+ +----------------------+
[1] On the new Node, configure the common settings required to join the cluster (refer to here); a rough sketch of what those settings typically include follows below.
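The referenced page is authoritative for the common settings; as a minimal sketch, they typically include disabling swap, loading the required kernel modules, and enabling the kubeadm sysctl prerequisites (all standard kubeadm requirements, not specific to this environment):

# typical pre-join settings (a sketch; the referenced setup page is authoritative)
[root@dlp-1 ~]# swapoff -a        # kubelet requires swap to be disabled
[root@dlp-1 ~]# modprobe overlay && modprobe br_netfilter
[root@dlp-1 ~]# cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@dlp-1 ~]# sysctl --system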
[2] On the Manager Node, add a proxy setting for the new Control Plane.
[root@ctrl ~]# vi /etc/nginx/nginx.conf
# add new Control Plane
stream {
    upstream k8s-api {
        server 10.0.0.30:6443;
        server 10.0.0.31:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-api;
    }
}

[root@ctrl ~]# systemctl reload nginx
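Before reloading, it is worth confirming that the edited configuration parses cleanly; nginx ships a standard syntax test for this (nothing environment-specific assumed):

# validate the configuration before reloading; nginx reports the offending line on failure
[root@ctrl ~]# nginx -t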
[3] On an existing Control Plane Node, confirm the join command and transfer the certificate files to the new Node (any user account on the new Node will do for the copy).
[root@dlp ~]# kubeadm token create --print-join-command
kubeadm join 10.0.0.25:6443 --token noxm3q.pz7r7bw4sq882j20 --discovery-token-ca-cert-hash sha256:ac6990d8007cb72c8c1ea1105ddffbb3d9905e425309e8dd5a14f367771fb7d7

[root@dlp ~]# cd /etc/kubernetes/pki
[root@dlp pki]# tar czvf kube-certs.tar.gz sa.pub sa.key ca.crt ca.key front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key
[root@dlp pki]# scp kube-certs.tar.gz centos@10.0.0.31:/tmp
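As an alternative to copying the archive by hand, kubeadm can distribute the control-plane certificates through the cluster itself: upload them once, then pass the printed certificate key to the join command. A sketch under the assumptions of this environment (the certificate key below is a placeholder for the value kubeadm prints):

# on an existing Control Plane Node: upload the certs and print a one-time certificate key
[root@dlp ~]# kubeadm init phase upload-certs --upload-certs

# on the new Node: join using that key instead of copying files (<certificate-key> is a placeholder)
[root@dlp-1 ~]# kubeadm join 10.0.0.25:6443 --token noxm3q.pz7r7bw4sq882j20 \
--discovery-token-ca-cert-hash sha256:ac6990d8007cb72c8c1ea1105ddffbb3d9905e425309e8dd5a14f367771fb7d7 \
--control-plane --certificate-key <certificate-key>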
[4] On the new Node, run the join command confirmed above, adding the [--control-plane] option.
# copy certificates transferred from the existing Control Plane
[root@dlp-1 ~]# mkdir /etc/kubernetes/pki
[root@dlp-1 ~]# tar zxvf /tmp/kube-certs.tar.gz -C /etc/kubernetes/pki

[root@dlp-1 ~]# kubeadm join 10.0.0.25:6443 --token noxm3q.pz7r7bw4sq882j20 \
--discovery-token-ca-cert-hash sha256:ac6990d8007cb72c8c1ea1105ddffbb3d9905e425309e8dd5a14f367771fb7d7 \
--control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [dlp-1.srv.world localhost] and IPs [10.0.0.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
.....
.....
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
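Following the instructions printed above, kubectl can also be used directly from the new node; for example, as a regular user:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes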
[5] Verify the settings on the Manager Node. The new Node has joined successfully once its status turns to [STATUS = Ready].
[root@ctrl ~]# kubectl get nodes
NAME               STATUS   ROLES           AGE   VERSION
dlp-1.srv.world    Ready    control-plane   83s   v1.32.6
dlp.srv.world      Ready    control-plane   8h    v1.32.6
node01.srv.world   Ready    <none>          8h    v1.32.6
node02.srv.world   Ready    <none>          8h    v1.32.6

[root@ctrl ~]# kubectl get pods -A -o wide | grep dlp-1
calico-system   calico-node-cdfjw                         1/1   Running   0   110s   10.0.0.31   dlp-1.srv.world   <none>   <none>
calico-system   csi-node-driver-5phrh                     2/2   Running   0   110s   10.85.0.2   dlp-1.srv.world   <none>   <none>
kube-system     etcd-dlp-1.srv.world                      1/1   Running   0   110s   10.0.0.31   dlp-1.srv.world   <none>   <none>
kube-system     kube-apiserver-dlp-1.srv.world            1/1   Running   0   110s   10.0.0.31   dlp-1.srv.world   <none>   <none>
kube-system     kube-controller-manager-dlp-1.srv.world   1/1   Running   0   110s   10.0.0.31   dlp-1.srv.world   <none>   <none>
kube-system     kube-proxy-kdj8v                          1/1   Running   0   110s   10.0.0.31   dlp-1.srv.world   <none>   <none>
kube-system     kube-scheduler-dlp-1.srv.world            1/1   Running   0   110s   10.0.0.31   dlp-1.srv.world   <none>   <none>
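Since the join output reported a new member in the stacked etcd cluster, the etcd membership can be checked as well. A sketch assuming this environment's pod name (etcd-dlp.srv.world) and the default kubeadm certificate paths inside the etcd pod:

# query the etcd member list from an existing etcd pod (pod name and cert paths are assumptions)
[root@ctrl ~]# kubectl -n kube-system exec etcd-dlp.srv.world -- etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
member list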