Kubernetes : Configure the Control Plane Node
2025/11/07
Configure a multi-node Kubernetes cluster. This example uses the following 4 nodes.
+----------------------+ +----------------------+
| [ ctrl.srv.world ] | | [ dlp.srv.world ] |
| Manager Node | | Control Plane |
+-----------+----------+ +-----------+----------+
eth0|10.0.0.25 eth0|10.0.0.30
| |
------------+--------------------------+-----------
| |
eth0|10.0.0.51 eth0|10.0.0.52
+-----------+----------+ +-----------+----------+
| [ node01.srv.world ] | | [ node02.srv.world ] |
| Worker Node#1 | | Worker Node#2 |
+----------------------+ +----------------------+
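Each node must be able to resolve the others' hostnames. If DNS does not already cover them, entries like the following in /etc/hosts on every node would do (a sketch based on the addresses in the diagram above; skip it if name resolution is already in place):

# sketch : hostname resolution for all 4 nodes (only if DNS does not handle it)
# add to /etc/hosts on every node
10.0.0.25   ctrl.srv.world ctrl
10.0.0.30   dlp.srv.world dlp
10.0.0.51   node01.srv.world node01
10.0.0.52   node02.srv.world node02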
[1]
[2] Run the initial setup on the Control Plane node.
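The flow below dumps kubeadm's defaults into config.yaml, edits the file, and then runs init. Between editing and init, the file can optionally be sanity-checked and the required images pre-pulled (a sketch; kubeadm config validate is available in recent kubeadm releases):

# optional : after editing config.yaml below, check it and pre-pull images
dlp:~ # kubeadm config validate --config config.yaml
dlp:~ # kubeadm config images pull --config config.yaml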
dlp:~ # kubeadm config print init-defaults > config.yaml
dlp:~ # vi config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # change to the Control Plane node's IP address
  advertiseAddress: 10.0.0.30
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  # change to the Control Plane node's hostname
  name: dlp.srv.world
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
# add : specify the representative IP address shared by the Control Plane nodes
# when the Kubernetes cluster is proxied on the Manager node as in this example,
# specify the Manager node's IP address
controlPlaneEndpoint: "10.0.0.25:6443"
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.34.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # add the network the Pod Network will use
  # below is the Calico default
  podSubnet: 192.168.0.0/16
proxy: {}
scheduler: {}
# add : enable nftables mode for kube-proxy
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables

dlp:~ # kubeadm init --config=config.yaml
[init] Using Kubernetes version: v1.34.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30 10.0.0.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
.....
.....
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.0.0.25:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:fbbf173149bc275b687e3a772b81d2e127cbbcd49a5546902e1a99f7e37d4729 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.25:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:fbbf173149bc275b687e3a772b81d2e127cbbcd49a5546902e1a99f7e37d4729
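The join token above is the one set in config.yaml (ttl: 24h0m0s), so it expires after 24 hours. If a node joins later, a fresh token and join command can be printed on the Control Plane node:

# if the token has expired, generate a new one together with its join command
dlp:~ # kubeadm token create --print-join-command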
# transfer the cluster admin file to the Manager node as any user
dlp:~ # scp /etc/kubernetes/admin.conf suse@10.0.0.25:/tmp
(suse@10.0.0.25) Password:
admin.conf                                    100% 5633     3.9MB/s   00:00
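Note that controlPlaneEndpoint and the join commands point at 10.0.0.25:6443, i.e. the Manager node, so a TCP proxy forwarding API traffic to the Control Plane must be listening there. A minimal sketch of such a proxy, assuming nginx with the stream module (an assumption for illustration; the actual proxy belongs to the Manager node's own setup):

# sketch : nginx.conf fragment on the Manager node (assumed proxy software)
stream {
    upstream k8s-api {
        server 10.0.0.30:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-api;
    }
}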
[3] Work on the Manager node from here on. Configure the Pod Network with Calico.
# set up the cluster admin user with the file transferred from the Control Plane
# to make a regular user the cluster admin, that user runs sudo cp/chown themselves
ctrl:~ # mkdir -p $HOME/.kube
ctrl:~ # mv /tmp/admin.conf $HOME/.kube/config
ctrl:~ # chown $(id -u):$(id -g) $HOME/.kube/config
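Before applying the Calico manifests, it is worth confirming that kubectl on the Manager node can reach the API server through the proxy (a quick check; output omitted here):

# verify API access from the Manager node
ctrl:~ # kubectl cluster-info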
ctrl:~ # wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/operator-crds.yaml
ctrl:~ # wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/tigera-operator.yaml
ctrl:~ # kubectl apply -f operator-crds.yaml
ctrl:~ # kubectl apply -f tigera-operator.yaml
ctrl:~ # cat > custom-resources.yaml <<EOF
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    linuxDataplane: Nftables
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF
ctrl:~ # kubectl apply -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

# show state : OK if STATUS = Ready
ctrl:~ # kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
dlp.srv.world   Ready    control-plane   5m36s   v1.34.1

# show state : OK if all are Running
ctrl:~ # kubectl get pods -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-74884d5955-29pc4          1/1     Running   0          2m29s
calico-apiserver   calico-apiserver-74884d5955-jt4p6          1/1     Running   0          2m29s
calico-system      calico-kube-controllers-855c4bcfd4-4f8lp   1/1     Running   0          2m27s
calico-system      calico-node-q66z2                          1/1     Running   0          2m27s
calico-system      calico-typha-74974cccd8-6smq9              1/1     Running   0          2m27s
calico-system      csi-node-driver-9vfmr                      2/2     Running   0          2m27s
kube-system        coredns-66bc5c9577-lphm9                   1/1     Running   0          5m47s
kube-system        coredns-66bc5c9577-lxrgm                   1/1     Running   0          5m47s
kube-system        etcd-dlp.srv.world                         1/1     Running   0          5m54s
kube-system        kube-apiserver-dlp.srv.world               1/1     Running   0          5m54s
kube-system        kube-controller-manager-dlp.srv.world      1/1     Running   0          5m54s
kube-system        kube-proxy-rs8vl                           1/1     Running   0          5m48s
kube-system        kube-scheduler-dlp.srv.world               1/1     Running   0          5m54s
tigera-operator    tigera-operator-7f579c97c6-wt5j6           1/1     Running   0          2m47s
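As a final end-to-end check, a throwaway Deployment can confirm that Pods receive addresses from the podSubnet (192.168.0.0/16) configured above; test-nginx is a hypothetical name used only for this sketch:

# sketch : verify Pod networking, then clean up
ctrl:~ # kubectl create deployment test-nginx --image=nginx
ctrl:~ # kubectl get pods -l app=test-nginx -o wide
# the Pod IP shown should fall within 192.168.0.0/16
ctrl:~ # kubectl delete deployment test-nginx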