CentOS Stream 10

Kubernetes : Configure the Control Plane Node (2025/01/22)

 

Configure a multi-node Kubernetes cluster.

This example uses four nodes, configured as shown below.

+----------------------+   +----------------------+
|  [ ctrl.srv.world ]  |   |   [ dlp.srv.world ]  |
|     Manager Node     |   |     Control Plane    |
+-----------+----------+   +-----------+----------+
        eth0|10.0.0.25             eth0|10.0.0.30
            |                          |
------------+--------------------------+-----------
            |                          |
        eth0|10.0.0.51             eth0|10.0.0.52
+-----------+----------+   +-----------+----------+
| [ node01.srv.world ] |   | [ node02.srv.world ] |
|     Worker Node#1    |   |     Worker Node#2    |
+----------------------+   +----------------------+

[1]

Apply the node-common settings, such as installing Kubeadm, to all nodes beforehand.
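
# optional : as a quick sanity check, you can confirm on each node that kubeadm is
# installed and that the CRI-O runtime is active before initializing the cluster

[root@dlp ~]#
kubeadm version -o short

[root@dlp ~]#
systemctl is-active crio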

[2] Run the initial setup on the Control Plane node.
[root@dlp ~]#
kubeadm config print init-defaults > config.yaml

[root@dlp ~]#
vi config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # change to the IP address of the Control Plane node
  advertiseAddress: 10.0.0.30
  bindPort: 6443
nodeRegistration:
  # change to the CRI-O socket
  criSocket: unix:///var/run/crio/crio.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  # change to the hostname of the Control Plane node
  name: dlp.srv.world
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
# addition : specify the shared representative IP address for the Control Plane nodes
# when the Kubernetes cluster is proxied by the Manager node as in this example,
# specify the IP address of the Manager node
controlPlaneEndpoint: "10.0.0.25:6443"
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.32.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # addition : the network used by the Pod Network
  # the value below is the Calico default
  podSubnet: 192.168.0.0/16
proxy: {}
scheduler: {}
# addition : enable kube-proxy in nftables mode
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables
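
# optional : kubeadm can validate the edited file for errors before initializing

[root@dlp ~]#
kubeadm config validate --config config.yaml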

[root@dlp ~]#
kubeadm init --config=config.yaml

[init] Using Kubernetes version: v1.32.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30 10.0.0.25]

.....
.....

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.0.25:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ac6990d8007cb72c8c1ea1105ddffbb3d9905e425309e8dd5a14f367771fb7d7 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.25:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ac6990d8007cb72c8c1ea1105ddffbb3d9905e425309e8dd5a14f367771fb7d7
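
# note : the bootstrap token above expires after the 24h ttl set in config.yaml;
# if needed, a new worker join command can be generated later on the Control Plane node

[root@dlp ~]#
kubeadm token create --print-join-command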

# transfer the cluster admin file to the Manager node as any user

[root@dlp ~]#
scp /etc/kubernetes/admin.conf centos@10.0.0.25:/tmp

centos@10.0.0.25's password:
admin.conf                     100% 5649    10.4MB/s   00:00
[3] From here on, work on the Manager node. Configure the Pod Network with Calico.
# set up the cluster admin user with the file transferred from the Control Plane node
# if a regular user is to be the admin user, have that user run sudo cp/chown *** themselves

[root@ctrl ~]#
mkdir -p $HOME/.kube

[root@ctrl ~]#
mv /tmp/admin.conf $HOME/.kube/config

[root@ctrl ~]#
chown $(id -u):$(id -g) $HOME/.kube/config
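
# optional : confirm that kubectl on the Manager node can reach the API server

[root@ctrl ~]#
kubectl cluster-info
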
[root@ctrl ~]#
wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/operator-crds.yaml

[root@ctrl ~]#
wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/tigera-operator.yaml

[root@ctrl ~]#
kubectl apply -f operator-crds.yaml

[root@ctrl ~]#
kubectl apply -f tigera-operator.yaml
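
# optional : wait for the Tigera operator Deployment to finish rolling out
# before creating the custom resources below

[root@ctrl ~]#
kubectl -n tigera-operator rollout status deployment/tigera-operator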

[root@ctrl ~]#
cat > custom-resources.yaml <<EOF

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    linuxDataplane: Nftables
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF

[root@ctrl ~]#
kubectl apply -f custom-resources.yaml

installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
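
# optional : the Installation resource is cluster-scoped and named [default];
# the applied spec, including [linuxDataplane: Nftables], can be reviewed with

[root@ctrl ~]#
kubectl get installation default -o yaml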

# show state : OK if STATUS = Ready

[root@ctrl ~]#
kubectl get nodes

NAME            STATUS   ROLES           AGE     VERSION
dlp.srv.world   Ready    control-plane   2m59s   v1.32.6

# show state : OK if all are Running

[root@ctrl ~]#
kubectl get pods -A

NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-57dbfc8b44-577w5          1/1     Running   0          78s
calico-apiserver   calico-apiserver-57dbfc8b44-lwz2r          1/1     Running   0          78s
calico-system      calico-kube-controllers-6696465fdc-bjqxm   1/1     Running   0          76s
calico-system      calico-node-krbhk                          1/1     Running   0          76s
calico-system      calico-typha-7bf9667c4d-wv58p              1/1     Running   0          77s
calico-system      csi-node-driver-4242x                      2/2     Running   0          76s
kube-system        coredns-668d6bf9bc-66txn                   1/1     Running   0          3m45s
kube-system        coredns-668d6bf9bc-ph45c                   1/1     Running   0          3m45s
kube-system        etcd-dlp.srv.world                         1/1     Running   1          3m51s
kube-system        kube-apiserver-dlp.srv.world               1/1     Running   1          3m52s
kube-system        kube-controller-manager-dlp.srv.world      1/1     Running   1          3m51s
kube-system        kube-proxy-wb2jd                           1/1     Running   0          3m45s
kube-system        kube-scheduler-dlp.srv.world               1/1     Running   1          3m51s
tigera-operator    tigera-operator-747864d56d-x74hn           1/1     Running   0          2m9s
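
# optional : with the Calico API server installed, the operator also reports overall
# component status; each entry should eventually show AVAILABLE = True

[root@ctrl ~]#
kubectl get tigerastatus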