CentOS Stream 9

Kubernetes : Configure Control Plane Node (2023/10/19)

 

Configure a multi-node Kubernetes cluster.

This example uses four nodes, configured as shown below.

+----------------------+   +----------------------+
|   [ mgr.srv.world ]  |   |   [ dlp.srv.world ]  |
|     Manager Node     |   |     Control Plane    |
+-----------+----------+   +-----------+----------+
        eth0|10.0.0.25             eth0|10.0.0.30
            |                          |
------------+--------------------------+-----------
            |                          |
        eth0|10.0.0.51             eth0|10.0.0.52
+-----------+----------+   +-----------+----------+
| [ node01.srv.world ] |   | [ node02.srv.world ] |
|     Worker Node#1    |   |     Worker Node#2    |
+----------------------+   +----------------------+

[1]

Apply the node-common settings, such as the Kubeadm installation, to all nodes beforehand.

[2]

Run the initial setup on the Control Plane node.

For [--control-plane-endpoint], specify the representative IP address shared by the Control Plane nodes.
When proxying the Kubernetes cluster through the Manager node as in this example, specify the IP address of the Manager node.
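If the representative address may change later (for example, if the Manager node proxy is ever replaced by a load balancer), a resolvable DNS name and port can be passed to [--control-plane-endpoint] instead of a fixed IP. A minimal sketch, assuming the hypothetical name [k8s.srv.world] resolves to 10.0.0.25:

# sketch: a DNS name for the shared endpoint instead of a fixed IP
# (k8s.srv.world is a hypothetical name assumed to resolve to the Manager node)
kubeadm init --control-plane-endpoint=k8s.srv.world:6443 --apiserver-advertise-address=10.0.0.30 --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/crio/crio.sock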

For [--apiserver-advertise-address], specify the IP address of the Control Plane node.

For [--pod-network-cidr], specify the network that the Pod Network will use.
Several plugins are available for building the Pod Network. (see the link below for details)

  ⇒ https://kubernetes.io/docs/concepts/cluster-administration/networking/

This example proceeds with Calico.
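Note that the 192.168.0.0/16 used below matches Calico's default IP pool, so [calico.yaml] can be applied unchanged. If a different [--pod-network-cidr] is chosen, the pool in the manifest generally needs to be aligned with it first; a minimal sketch (10.244.0.0/16 is just an example value):

# sketch: align Calico's IP pool with a non-default Pod CIDR before applying the manifest
# uncomment and edit the CALICO_IPV4POOL_CIDR variable in calico.yaml, e.g.
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"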
# if Firewalld is running, allow the related services

[root@dlp ~]#
firewall-cmd --add-service={kube-apiserver,kube-control-plane,kube-control-plane-secure,kubelet,kubelet-readonly,http,https}

success
[root@dlp ~]#
firewall-cmd --runtime-to-permanent

success
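The kube-* service definitions used above ship with recent firewalld releases; if they are unavailable on the installed version, the documented Control Plane ports can be opened by number instead. A minimal sketch:

# sketch: open the standard control-plane ports directly
# (6443 = kube-apiserver, 2379-2380 = etcd, 10250 = kubelet,
#  10257 = kube-controller-manager, 10259 = kube-scheduler)
firewall-cmd --add-port={6443,2379-2380,10250,10257,10259}/tcp
firewall-cmd --runtime-to-permanent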
[root@dlp ~]#
kubeadm init --control-plane-endpoint=10.0.0.25 --apiserver-advertise-address=10.0.0.30 --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/crio/crio.sock

[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30 10.0.0.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key

.....
.....

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.0.25:6443 --token yq90qu.cl827hpp06az8isd \
        --discovery-token-ca-cert-hash sha256:8a8bd725c9cbf8d03c0a724bded0afb923a067d48ca50fd8f0346fd3d0a27b6e \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.25:6443 --token yq90qu.cl827hpp06az8isd \
        --discovery-token-ca-cert-hash sha256:8a8bd725c9cbf8d03c0a724bded0afb923a067d48ca50fd8f0346fd3d0a27b6e
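
The bootstrap token above expires (24 hours by default); a fresh join command can be printed at any time on the Control Plane node with kubeadm's standard [token create] subcommand:

[root@dlp ~]#
kubeadm token create --print-join-command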

# transfer the cluster admin file to the Manager node as any user

[root@dlp ~]#
scp /etc/kubernetes/admin.conf centos@10.0.0.25:~/

centos@10.0.0.25's password:
admin.conf                                    100% 5645    20.7MB/s   00:00
[3]

From here on, work on the Manager node. Configure the Pod Network with Calico.
# configure the cluster admin user with the file transferred from the Control Plane
# if a regular user is to be the admin user, that user should run [sudo cp/chown ***] themselves

[root@mgr ~]#
mkdir -p $HOME/.kube

[root@mgr ~]#
cp /home/centos/admin.conf $HOME/.kube/config

[root@mgr ~]#
chown $(id -u):$(id -g) $HOME/.kube/config
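
# (optional check with standard kubectl) verify that the transferred admin.conf works
[root@mgr ~]#
kubectl cluster-info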
[root@mgr ~]#
wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml

[root@mgr ~]#
kubectl apply -f calico.yaml

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
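
# the Calico pods may take a minute or two to become Ready;
# (optional) progress can be watched with kubectl's standard -w (watch) option
[root@mgr ~]#
kubectl get pods -n kube-system -w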

# verify : OK if STATUS = Ready

[root@mgr ~]#
kubectl get nodes

NAME            STATUS   ROLES           AGE     VERSION
dlp.srv.world   Ready    control-plane   7m52s   v1.28.2

# verify : OK if all pods are Running

[root@mgr ~]#
kubectl get pods -A

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57758d645c-ws2gd   1/1     Running   0          50s
kube-system   calico-node-6cxp9                          1/1     Running   0          50s
kube-system   coredns-5dd5756b68-rn87g                   1/1     Running   0          8m18s
kube-system   coredns-5dd5756b68-zmsks                   1/1     Running   0          8m18s
kube-system   etcd-dlp.srv.world                         1/1     Running   0          8m32s
kube-system   kube-apiserver-dlp.srv.world               1/1     Running   0          8m32s
kube-system   kube-controller-manager-dlp.srv.world      1/1     Running   0          8m33s
kube-system   kube-proxy-ktlgl                           1/1     Running   0          8m18s
kube-system   kube-scheduler-dlp.srv.world               1/1     Running   0          8m32s