openSUSE Leap 16

Kubernetes : Add a Control Plane Node (2025/11/07)

 

To add a new Control Plane node to an existing Kubernetes cluster, configure it as follows.

In this example, the cluster consists of the 4 nodes shown below.
We add [dlp-1.srv.world (10.0.0.31)] to it as a new Control Plane node.

※ Note: When etcd runs on the Control Plane nodes, an etcd cluster of 1-2 members has a fault tolerance of 0. With only 2 Control Plane nodes, losing either one therefore makes etcd unreachable and the cluster unusable.
(Fault tolerance rises to 1 with 3-4 etcd members and to 2 with 5-6 members.)
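The note above follows from etcd's quorum rule: a majority of members must stay up. A minimal shell sketch of the arithmetic:

```shell
# An N-member etcd cluster needs a quorum of floor(N/2) + 1 members,
# so it tolerates the loss of floor((N - 1) / 2) members.
for n in 1 2 3 4 5 6 7; do
  echo "members=$n quorum=$(( n / 2 + 1 )) tolerance=$(( (n - 1) / 2 ))"
done
```

Note that adding a member to reach an even count (2, 4, 6) raises the quorum without raising the tolerance, which is why odd-sized etcd clusters are recommended.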

+----------------------+   +----------------------+
|  [ ctrl.srv.world ]  |   |   [ dlp.srv.world ]  |
|     Manager Node     |   |     Control Plane    |
+-----------+----------+   +-----------+----------+
        eth0|10.0.0.25             eth0|10.0.0.30
            |                          |
------------+--------------------------+-----------
            |                          |
        eth0|10.0.0.51             eth0|10.0.0.52
+-----------+----------+   +-----------+----------+
| [ node01.srv.world ] |   | [ node02.srv.world ] |
|     Worker Node#1    |   |     Worker Node#2    |
+----------------------+   +----------------------+

[1]

On the node to be added, apply the settings common to all nodes first, referring to here.

[2] On the Manager node, add the new Control Plane node to the proxy configuration.
ctrl:~ #
vi /etc/nginx/nginx.conf
# add the new Control Plane
stream {
    upstream k8s-api {
        server 10.0.0.30:6443;
        server 10.0.0.31:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-api;
    }
}

ctrl:~ #
systemctl reload nginx
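As an optional refinement not in the config above, nginx's stream upstream supports passive health checking via the [max_fails] and [fail_timeout] server parameters, so traffic stops going to an API server that repeatedly fails to accept connections. A sketch of the same upstream block with those parameters added:

```nginx
stream {
    upstream k8s-api {
        # after 2 failed connections within 10s, skip this server for 10s
        server 10.0.0.30:6443 max_fails=2 fail_timeout=10s;
        server 10.0.0.31:6443 max_fails=2 fail_timeout=10s;
    }
    server {
        listen 6443;
        proxy_pass k8s-api;
    }
}
```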
[3] On the existing Control Plane node, confirm the join token, and transfer the certificates to the new node as any user beforehand.
dlp:~ #
cd /etc/kubernetes/pki

dlp:/etc/kubernetes/pki #
tar czvf kube-certs.tar.gz sa.pub sa.key ca.crt ca.key front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key

dlp:/etc/kubernetes/pki #
scp kube-certs.tar.gz suse@10.0.0.31:/tmp
dlp:/etc/kubernetes/pki #
kubeadm token create --print-join-command

kubeadm join 10.0.0.25:6443 --token h4mouw.7v2pb9qmhujv9u6v --discovery-token-ca-cert-hash sha256:fbbf173149bc275b687e3a772b81d2e127cbbcd49a5546902e1a99f7e37d4729
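If the token was created earlier without [--print-join-command], the [--discovery-token-ca-cert-hash] value can be recomputed from the cluster CA certificate with openssl. A minimal sketch; it generates a throwaway self-signed CA so it runs anywhere, but on a real cluster you would point the x509 step at [/etc/kubernetes/pki/ca.crt] instead:

```shell
# generate a disposable CA certificate purely for demonstration
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# SHA-256 digest of the CA public key in DER form, i.e. the
# value expected by --discovery-token-ca-cert-hash
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
rm -rf "$tmp"
```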
[4] On the node to be added, run the join command confirmed on the Control Plane node, with the [--control-plane] option appended.
# extract the transferred certificates

dlp-1:~ #
mkdir /etc/kubernetes/pki

dlp-1:~ #
tar zxvf /tmp/kube-certs.tar.gz -C /etc/kubernetes/pki
dlp-1:~ #
kubeadm join 10.0.0.25:6443 --token h4mouw.7v2pb9qmhujv9u6v \
--discovery-token-ca-cert-hash sha256:fbbf173149bc275b687e3a772b81d2e127cbbcd49a5546902e1a99f7e37d4729 \
--control-plane

[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp-1.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.31 10.0.0.25]
[certs] Generating "apiserver-kubelet-client" certificate and key

.....
.....

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
[5] On the Manager node, verify the node list. It is OK if the newly added node shows STATUS = Ready.
ctrl:~ #
kubectl get nodes

NAME               STATUS   ROLES           AGE   VERSION
dlp-1.srv.world    Ready    control-plane   80s   v1.34.1
dlp.srv.world      Ready    control-plane   67m   v1.34.1
node01.srv.world   Ready    <none>          58m   v1.34.1
node02.srv.world   Ready    <none>          54m   v1.34.1

ctrl:~ #
kubectl get pods -A -o wide | grep dlp-1

calico-system          calico-node-ls6jc                                       1/1     Running   0             109s   10.0.0.31         dlp-1.srv.world    <none>           <none>
calico-system          csi-node-driver-bzm98                                   2/2     Running   0             109s   192.168.112.129   dlp-1.srv.world    <none>           <none>
kube-system            etcd-dlp-1.srv.world                                    1/1     Running   0             107s   10.0.0.31         dlp-1.srv.world    <none>           <none>
kube-system            kube-apiserver-dlp-1.srv.world                          1/1     Running   0             107s   10.0.0.31         dlp-1.srv.world    <none>           <none>
kube-system            kube-controller-manager-dlp-1.srv.world                 1/1     Running   0             107s   10.0.0.31         dlp-1.srv.world    <none>           <none>
kube-system            kube-proxy-hbsdf                                        1/1     Running   0             109s   10.0.0.31         dlp-1.srv.world    <none>           <none>
kube-system            kube-scheduler-dlp-1.srv.world                          1/1     Running   0             107s   10.0.0.31         dlp-1.srv.world    <none>           <none>