CentOS Stream 9

Kubernetes : Add Control Plane Node (2023/10/19)

 

Add new Control Plane Nodes to existing Kubernetes Cluster.

This example is based on the cluster environment shown below.
It adds [dlp-1.srv.world (10.0.0.31)] to this cluster as a new Control Plane Node.

*Note
With stacked etcd running on the Control Plane Nodes, the fault tolerance of etcd is 0 for a cluster of 1 or 2 members. So in a configuration with 2 Control Planes, if one of them goes down, etcd loses quorum, it is no longer possible to connect to etcd, and the cluster cannot be used normally. (A quick health-check sketch follows the diagram below.)

+----------------------+   +----------------------+
|   [ mgr.srv.world ]  |   |   [ dlp.srv.world ]  |
|     Manager Node     |   |     Control Plane    |
+-----------+----------+   +-----------+----------+
        eth0|10.0.0.25             eth0|10.0.0.30
            |                          |
------------+--------------------------+-----------
            |                          |
        eth0|10.0.0.51             eth0|10.0.0.52
+-----------+----------+   +-----------+----------+
| [ node01.srv.world ] |   | [ node02.srv.world ] |
|     Worker Node#1    |   |     Worker Node#2    |
+----------------------+   +----------------------+
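For reference, an etcd cluster of N members needs floor(N/2) + 1 members for quorum, so fault tolerance only rises above 0 at 3 members (2 members tolerate 0 failures, 3 tolerate 1). Cluster-wide etcd health can be checked with [etcdctl] inside an etcd Pod; a minimal sketch, assuming the kubeadm default certificate paths and the Pod name from this environment:

# quorum = floor(N/2) + 1 : N=2 tolerates 0 failures, N=3 tolerates 1
[root@dlp ~]#
kubectl -n kube-system exec etcd-dlp.srv.world -- etcdctl \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
endpoint health --cluster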

[1]

On the new Node, configure the common settings required to join the Cluster, refer to here. (A sketch of the typical prerequisites follows below.)
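For reference, a minimal sketch of the usual kubeadm prerequisites on a new node (the linked page has the authoritative steps for this environment; this assumes the container runtime and the kubeadm/kubelet packages are installed the same way as on the existing nodes):

# load kernel modules required by the container runtime and kube-proxy
[root@dlp-1 ~]#
cat <<'EOF' > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

[root@dlp-1 ~]#
modprobe overlay; modprobe br_netfilter
# kernel parameters required by Kubernetes networking
[root@dlp-1 ~]#
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

[root@dlp-1 ~]#
sysctl --system
# kubelet requires swap to be disabled
[root@dlp-1 ~]#
swapoff -a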

[2] Add a proxy setting for the new Control Plane on the Manager Node.
[root@mgr ~]#
vi /etc/nginx/nginx.conf
# add new Control Plane
stream {
    upstream k8s-api {
        server 10.0.0.30:6443;
        server 10.0.0.31:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-api;
    }
}

[root@mgr ~]#
systemctl reload nginx
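Before reloading, the nginx configuration can be validated, and after the reload the API endpoint can be probed through the proxy (a sketch; unauthenticated access to [/healthz] is allowed by default on kubeadm clusters):

# validate the configuration file syntax
[root@mgr ~]#
nginx -t
# probe the API server through the proxy (expect: ok)
[root@mgr ~]#
curl -k https://10.0.0.25:6443/healthz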
[3] Confirm the join command on an existing Control Plane Node and transfer the certificate files to the new Node with any user.
[root@dlp ~]#
cd /etc/kubernetes/pki

[root@dlp pki]#
tar czvf kube-certs.tar.gz sa.pub sa.key ca.crt ca.key front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key

[root@dlp pki]#
scp kube-certs.tar.gz centos@10.0.0.31:~/
[root@dlp pki]#
kubeadm token create --print-join-command

kubeadm join 10.0.0.25:6443 --token zwaoit.d7983fprikz2turh --discovery-token-ca-cert-hash sha256:8a8bd725c9cbf8d03c0a724bded0afb923a067d48ca50fd8f0346fd3d0a27b6e
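As an alternative to copying certificate files by hand, kubeadm can distribute them through the cluster itself; a sketch using [kubeadm init phase upload-certs], which stores the certificates in an encrypted Secret that expires after two hours ([<certificate-key>] below is a placeholder printed by the command):

# on an existing Control Plane : upload certificates and print the decryption key
[root@dlp ~]#
kubeadm init phase upload-certs --upload-certs
# on the new Node : join with the printed key instead of copying files
# kubeadm join 10.0.0.25:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash> \
#   --control-plane --certificate-key <certificate-key>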
[4] Run the join command you confirmed, on the new Node, with the [--control-plane] option.
# copy certificates transferred from existing Control Plane

[root@dlp-1 ~]#
mkdir /etc/kubernetes/pki

[root@dlp-1 ~]#
tar zxvf /home/centos/kube-certs.tar.gz -C /etc/kubernetes/pki
# if Firewalld is running, allow related services

[root@dlp-1 ~]#
firewall-cmd --add-service={kube-apiserver,kube-control-plane,kube-control-plane-secure,kubelet,kubelet-readonly,http,https}

success
[root@dlp-1 ~]#
firewall-cmd --runtime-to-permanent

success
[root@dlp-1 ~]#
kubeadm join 10.0.0.25:6443 --token zwaoit.d7983fprikz2turh \
--discovery-token-ca-cert-hash sha256:8a8bd725c9cbf8d03c0a724bded0afb923a067d48ca50fd8f0346fd3d0a27b6e \
--control-plane

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[ 1470.582145] overlayfs: idmapped layers are currently not supported
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost www.srv.world] and IPs [10.0.0.31 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key

.....
.....

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
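At this point kubeadm has written the Control Plane static Pod manifests on the new Node; a quick sanity check (the four files below are the kubeadm defaults):

[root@dlp-1 ~]#
ls /etc/kubernetes/manifests

etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml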
[5] Verify the settings on the Manager Node. It's OK if the status of the new Node turns to [STATUS = Ready].
[root@mgr ~]#
kubectl get nodes

NAME               STATUS   ROLES           AGE    VERSION
dlp-1.srv.world    Ready    control-plane   87s    v1.28.2
dlp.srv.world      Ready    control-plane   176m   v1.28.2
node01.srv.world   Ready    <none>          112m   v1.28.2
node02.srv.world   Ready    <none>          111m   v1.28.2

[root@mgr ~]#
kubectl get pods -A -o wide | grep dlp-1

kube-system            calico-node-gj68v                              0/1     Running   0               3m5s    10.0.0.31         dlp-1.srv.world      <none>           <none>
kube-system            etcd-dlp-1.srv.world                           1/1     Running   0               3m4s    10.0.0.31         dlp-1.srv.world      <none>           <none>
kube-system            kube-apiserver-dlp-1.srv.world                 1/1     Running   0               3m5s    10.0.0.31         dlp-1.srv.world      <none>           <none>
kube-system            kube-controller-manager-dlp-1.srv.world        1/1     Running   0               3m5s    10.0.0.31         dlp-1.srv.world      <none>           <none>
kube-system            kube-proxy-wmmr4                               1/1     Running   0               3m5s    10.0.0.31         dlp-1.srv.world      <none>           <none>
kube-system            kube-scheduler-dlp-1.srv.world                 1/1     Running   0               3m5s    10.0.0.31         dlp-1.srv.world      <none>           <none>
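To confirm that the stacked etcd cluster now has two members, [etcdctl] can be run inside one of the etcd Pods; a minimal sketch, assuming the kubeadm default certificate paths:

[root@mgr ~]#
kubectl -n kube-system exec etcd-dlp.srv.world -- etcdctl \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
member list -w table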