CentOS 7

Kubernetes : Configure Master Node    2015/12/13

 
Install Kubeadm to configure a multi-node Kubernetes cluster.
This example is based on the environment shown below.
For the system requirements, each Node must have a unique hostname, MAC address, and product_uuid.
The MAC address and product_uuid are generally already unique if you installed the OS on a physical or virtual machine with a common procedure.
You can see the product_uuid with the command [dmidecode -s system-uuid].
 -----------+---------------------------+--------------------------+------------
            |                           |                          |
        eth0|10.0.0.30              eth0|10.0.0.51             eth0|10.0.0.52
 +----------+-----------+   +-----------+----------+   +-----------+----------+
 |   [ dlp.srv.world ]  |   | [ node01.srv.world ] |   | [ node02.srv.world ] |
 |      Master Node     |   |      Worker Node     |   |      Worker Node     |
 +----------------------+   +----------------------+   +----------------------+
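To verify these requirements, the values can be checked like this (shown on the Master Node here; run the same commands on each Worker Node and confirm the values differ):

# show the hostname
[root@dlp ~]#
hostname

# show the MAC address of each interface
[root@dlp ~]#
ip link show

# show the product_uuid
[root@dlp ~]#
dmidecode -s system-uuid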

[2]
Configure the initial setup on the Master Node.
For the [--apiserver-advertise-address] option, specify the IP address the Kubernetes API server listens on.
For the [--pod-network-cidr] option, specify the network that the Pod network uses.
There are several plugins for the Pod network. (refer to the details below)
  ⇒ https://kubernetes.io/docs/concepts/cluster-administration/networking/
In this example, Flannel is selected. For Flannel, specify [--pod-network-cidr=10.244.0.0/16] so that the Pod network works normally.
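As a reference, the same two settings can also be written in a kubeadm configuration file and passed with the [--config] option instead of the command line flags used below. This is only a minimal sketch, assuming the [kubeadm.k8s.io/v1beta2] API of this kubeadm release; the file name [kubeadm-config.yaml] is arbitrary.
[root@dlp ~]#
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.30
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
EOF

# then run [kubeadm init --config kubeadm-config.yaml] instead of (not together with) the flags below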
[root@dlp ~]#
kubeadm init --apiserver-advertise-address=10.0.0.30 --pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30]
.....
.....
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# the command below needs to be run on a Worker Node when it joins the cluster, so take a note of it
kubeadm join 10.0.0.30:6443 --token 31eeyr.uxvpz3b0teajkaki \
        --discovery-token-ca-cert-hash sha256:f2caf4fe6f26dc6cc8188b55ee3c825e5b8d9779a61bfd4eda4b152627e56184
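
# the join token printed above is valid only for a limited time (24 hours by default);
# if it expires before a Worker Node joins, print a new join command on the Master Node like this

[root@dlp ~]#
kubeadm token create --print-join-command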

# set cluster admin user

# if you set a common (non-root) user as the cluster admin, log in as that user and run [sudo cp/chown ***]

[root@dlp ~]#
mkdir -p $HOME/.kube

[root@dlp ~]#
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@dlp ~]#
chown $(id -u):$(id -g) $HOME/.kube/config
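
# verify that kubectl can reach the API server with the copied kubeconfig (a quick check; it prints the control plane and CoreDNS endpoints)

[root@dlp ~]#
kubectl cluster-info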
[3] Configure Pod Network with Flannel.
[root@dlp ~]#
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
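
# optionally wait until the Flannel DaemonSet created above has finished rolling out (the DaemonSet name is taken from the output above)

[root@dlp ~]#
kubectl -n kube-system rollout status daemonset kube-flannel-ds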

# show state ⇒ OK if STATUS = Ready

[root@dlp ~]#
kubectl get nodes

NAME            STATUS   ROLES                  AGE    VERSION
dlp.srv.world   Ready    control-plane,master   2m4s   v1.21.1

# show state ⇒ OK if all are Running

[root@dlp ~]#
kubectl get pods --all-namespaces

NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-f8cvh                1/1     Running   0          117s
kube-system   coredns-558bd4d5db-tg8v6                1/1     Running   0          117s
kube-system   etcd-dlp.srv.world                      1/1     Running   0          2m6s
kube-system   kube-apiserver-dlp.srv.world            1/1     Running   0          2m6s
kube-system   kube-controller-manager-dlp.srv.world   1/1     Running   0          2m6s
kube-system   kube-flannel-ds-wdvpn                   1/1     Running   0          56s
kube-system   kube-proxy-wzh69                        1/1     Running   0          117s
kube-system   kube-scheduler-dlp.srv.world            1/1     Running   0          2m6s
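
# if any Pod stays in a state other than Running, it can be inspected like this (a sketch; the Pod name is just taken from the listing above, replace it with the Pod in question)

[root@dlp ~]#
kubectl -n kube-system describe pod coredns-558bd4d5db-f8cvh

[root@dlp ~]#
kubectl -n kube-system logs coredns-558bd4d5db-f8cvh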