Kubernetes : Configure Control Plane Node
2024/06/07
Configure a multi-node Kubernetes cluster.
This example is based on the environment shown below.

+----------------------+          +----------------------+
|  [ ctrl.srv.world ]  |          |  [ dlp.srv.world ]   |
|     Manager Node     |          |    Control Plane     |
+-----------+----------+          +-----------+----------+
        eth0|10.0.0.25                    eth0|10.0.0.30
            |                                 |
------------+---------------------------------+-----------
            |                                 |
        eth0|10.0.0.51                    eth0|10.0.0.52
+-----------+----------+          +-----------+----------+
| [ node01.srv.world ] |          | [ node02.srv.world ] |
|    Worker Node#1     |          |    Worker Node#2     |
+----------------------+          +----------------------+
[1]
Configure the prerequisites on all nodes, refer to here.
[2]
Configure the initial setup on the Control Plane node.
For [control-plane-endpoint], specify the hostname or IP address that is shared among the Kubernetes cluster.
For [apiserver-advertise-address], specify the Control Plane node's IP address.
For the [--pod-network-cidr] option, specify the network that the pod network uses.
⇒ https://kubernetes.io/docs/concepts/cluster-administration/networking/
This example selects Calico.
root@dlp:~# kubeadm init --control-plane-endpoint=10.0.0.25 --apiserver-advertise-address=10.0.0.30 --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///run/containerd/containerd.sock
[init] Using Kubernetes version: v1.30.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30 10.0.0.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
.....
.....
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.0.25:6443 --token 33l1jx.75weydwhz6elwdqw \
        --discovery-token-ca-cert-hash sha256:fe7bab5af66756eb9516e4c51c072af4839509fa0d727a124ce8a9cbaafaf829 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.25:6443 --token 33l1jx.75weydwhz6elwdqw \
        --discovery-token-ca-cert-hash sha256:fe7bab5af66756eb9516e4c51c072af4839509fa0d727a124ce8a9cbaafaf829

# transfer the authentication file for cluster admin to the Manager Node (any user is fine)
root@dlp:~# scp /etc/kubernetes/admin.conf ubuntu@10.0.0.25:/tmp
ubuntu@10.0.0.25's password:
admin.conf                                   100% 5649     8.1MB/s   00:00
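As an alternative to the long command line above, the same settings can be kept in a kubeadm configuration file and passed with [--config]. The sketch below is only an equivalent of the flags used in this example, not an additional required step; the file name [kubeadm-config.yaml] is an arbitrary choice, and the addresses are the ones from this example.

root@dlp:~# cat > kubeadm-config.yaml <<'EOF'
# InitConfiguration : settings for this node's kubeadm init run
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.30          # same as --apiserver-advertise-address
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
# ClusterConfiguration : cluster-wide settings
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: 10.0.0.25:6443   # same as --control-plane-endpoint
networking:
  podSubnet: 192.168.0.0/16            # same as --pod-network-cidr
EOF
root@dlp:~# kubeadm init --config kubeadm-config.yaml

If the join token shown in the output is lost or expires (tokens are valid for 24 hours by default), a fresh worker join command can be printed on the Control Plane node with [kubeadm token create --print-join-command].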
[3]
Work on the Manager Node. Configure the pod network with Calico.
# set the cluster admin user with the file transferred from the Control Plane
# if you set a common user as cluster admin, login as that user and run [sudo cp/chown ***]
root@ctrl:~# mkdir -p $HOME/.kube
root@ctrl:~# mv /tmp/admin.conf $HOME/.kube/config
root@ctrl:~# chown $(id -u):$(id -g) $HOME/.kube/config
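Before applying the pod network, it can be worth confirming that the copied credentials actually reach the API server from the Manager Node. A minimal sanity check, assuming the kubeconfig was placed as above (output will vary with your environment):

root@ctrl:~# kubectl cluster-info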
root@ctrl:~# wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml
root@ctrl:~# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

# show status : OK if STATUS = Ready
root@ctrl:~# kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
dlp.srv.world   Ready    control-plane   100s   v1.30.1

# show status : OK if all are Running
root@ctrl:~# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57cf4498-dgb7z    1/1     Running   0          42s
kube-system   calico-node-wcwr9                         1/1     Running   0          42s
kube-system   coredns-7db6d8ff4d-gzmf2                  1/1     Running   0          2m43s
kube-system   coredns-7db6d8ff4d-vs9dm                  1/1     Running   0          2m43s
kube-system   etcd-dlp.srv.world                        1/1     Running   0          3m
kube-system   kube-apiserver-dlp.srv.world              1/1     Running   0          3m
kube-system   kube-controller-manager-dlp.srv.world     1/1     Running   0          2m58s
kube-system   kube-proxy-cjjwz                          1/1     Running   0          2m44s
kube-system   kube-scheduler-dlp.srv.world              1/1     Running   0          2m58s
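Right after applying the manifest, the node and the Calico pods may briefly show NotReady or ContainerCreating while images are pulled. One way to block until everything settles is [kubectl wait]; the 300-second timeout below is an arbitrary value chosen for this example:

root@ctrl:~# kubectl wait --for=condition=Ready nodes --all --timeout=300s
root@ctrl:~# kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s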
|