Docker : Swarm Cluster (2022/07/29)
Configure Docker Swarm to create a Docker Cluster with multiple Docker nodes.
In this example, configure a Swarm Cluster with 3 Docker nodes as follows.
There are 2 roles in a Swarm Cluster, [Manager nodes] and [Worker nodes]. This example assigns the roles as follows.

-----------+---------------------------+--------------------------+------------
           |                           |                          |
       eth0|10.0.0.51             eth0|10.0.0.52             eth0|10.0.0.53
+----------+-----------+   +-----------+----------+   +-----------+----------+
| [ node01.srv.world ] |   | [ node02.srv.world ] |   | [ node03.srv.world ] |
|       Manager        |   |        Worker        |   |        Worker        |
+----------------------+   +----------------------+   +----------------------+
[1] Install Docker and start the Docker service on all nodes.
[2] Change settings for Swarm mode on all nodes.
[root@node01 ~]# vi /etc/docker/daemon.json
# create new
# disable the live-restore feature (it cannot be used in Swarm mode)
{
    "live-restore": false
}
[root@node01 ~]# systemctl restart docker
# if Firewalld is running, allow the Swarm ports
[root@node01 ~]# firewall-cmd --add-port={2377/tcp,7946/tcp,7946/udp,4789/udp}
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
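A typo in [daemon.json] stops Docker from starting after the restart, so it can be worth validating the file before restarting the service. Below is a minimal sketch, assuming python3 is available; it writes to a temporary file for illustration only (on a real node the target is [/etc/docker/daemon.json]).

```shell
#!/bin/sh
# Sketch: generate the Swarm-compatible daemon.json and check that it
# is well-formed JSON before restarting Docker. A temporary file is
# used here for illustration; the real target is /etc/docker/daemon.json.
set -e

CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
    "live-restore": false
}
EOF

# python3 exits non-zero on invalid JSON, aborting before any restart
python3 -c "import json, sys; json.load(open(sys.argv[1]))" "$CONF"
echo "daemon.json is valid JSON"
```

Only after the validation succeeds would you copy the file into place and run [systemctl restart docker].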
[3] Configure the Swarm Cluster on the Manager Node.
[root@node01 ~]# docker swarm init
Swarm initialized: current node (junjnqbtrqeeowwgvisrv4gxl) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-02snorr1uhq032czkhvzoy5owhxrklgllje3ez3uy5odyow4co-0e3yfhee2na3lcutmjff1l2uj 10.0.0.51:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
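If the original output has scrolled away, the Manager can reprint the worker token at any time with [docker swarm join-token -q worker]. The snippet below is only a sketch that assembles the join command from this example's values; on a live manager the token would come from that command instead of being pasted in.

```shell
# Sketch: build the worker join command from the manager address and
# the join token. Both values are taken from this example's output;
# on a live manager fetch the token with: docker swarm join-token -q worker
MANAGER_IP="10.0.0.51"
TOKEN="SWMTKN-1-02snorr1uhq032czkhvzoy5owhxrklgllje3ez3uy5odyow4co-0e3yfhee2na3lcutmjff1l2uj"

JOIN_CMD="docker swarm join --token ${TOKEN} ${MANAGER_IP}:2377"
echo "$JOIN_CMD"
```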
[4] Join the Swarm Cluster on all Worker Nodes. Simply run the command that was displayed when [swarm init] was run on the Manager Node.
[root@node02 ~]# docker swarm join \
--token SWMTKN-1-02snorr1uhq032czkhvzoy5owhxrklgllje3ez3uy5odyow4co-0e3yfhee2na3lcutmjff1l2uj 10.0.0.51:2377

This node joined a swarm as a worker.
[5] Verify with the [node ls] command that the worker nodes have joined the Cluster normally.
[root@node01 ~]# docker node ls
ID                            HOSTNAME           STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
junjnqbtrqeeowwgvisrv4gxl *   node01.srv.world   Ready     Active         Leader           20.10.17
4s22ryuu72jzrkl2ye9318g6a     node02.srv.world   Ready     Active                          20.10.17
xbty6keny29dt12f835rs45mh     node03.srv.world   Ready     Active                          20.10.17
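For scripted health checks it can be handy to count how many nodes report [Ready]. A sketch follows; the heredoc replays the example output above, and on a live manager you would pipe the real [docker node ls] output instead.

```shell
# Sketch: count nodes in the Ready state. The heredoc replays the
# example output of [docker node ls]; replace it with the real
# command on a live manager.
NODE_LS_OUTPUT=$(cat <<'EOF'
ID                          HOSTNAME           STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
junjnqbtrqeeowwgvisrv4gxl * node01.srv.world   Ready     Active         Leader           20.10.17
4s22ryuu72jzrkl2ye9318g6a   node02.srv.world   Ready     Active                          20.10.17
xbty6keny29dt12f835rs45mh   node03.srv.world   Ready     Active                          20.10.17
EOF
)

# the header line contains STATUS, not Ready, so it is not counted
READY=$(echo "$NODE_LS_OUTPUT" | grep -c ' Ready ')
echo "${READY} nodes Ready"
```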
[6] Verify the Cluster works normally by creating a test service. For example, create web server containers and configure a Swarm service. Generally, a container image pulled from a registry is used on all Nodes, but in this example, a container image is built on each Node to verify the settings and access paths of the Swarm Cluster.
[root@node01 ~]# vi Dockerfile
FROM quay.io/centos/centos:stream9
MAINTAINER ServerWorld <admin@srv.world>
RUN dnf -y install nginx
RUN echo "Nginx on node01" > /usr/share/nginx/html/index.html
EXPOSE 80
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]

[root@node01 ~]# docker build -t nginx-server:latest .
[7] Configure the service on the Manager Node. After the service has been created successfully, access the Manager node's hostname or IP address to verify it works normally. Requests are load-balanced across the nodes in round-robin fashion as follows.
[root@node01 ~]# docker images
REPOSITORY              TAG       IMAGE ID       CREATED          SIZE
nginx-server            latest    36ee54c931d0   49 seconds ago   251MB
quay.io/centos/centos   stream9   61674c24ebbf   34 hours ago     152MB

# create a service with 2 replicas
[root@node01 ~]# docker service create --name swarm_cluster --replicas=2 -p 80:80 nginx-server:latest
yey0i2qb8lntp1h1l3er1jt2i
overall progress: 0 out of 2 tasks
1/2: preparing
2/2: ready
.....
.....

# show service list
[root@node01 ~]# docker service ls
ID             NAME            MODE         REPLICAS   IMAGE                 PORTS
yey0i2qb8lnt   swarm_cluster   replicated   2/2        nginx-server:latest   *:80->80/tcp

# inspect the service
[root@node01 ~]# docker service inspect swarm_cluster --pretty
ID:             yey0i2qb8lntp1h1l3er1jt2i
Name:           swarm_cluster
Service Mode:   Replicated
 Replicas:      2
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         nginx-server:latest
 Init:          false
Resources:
Endpoint Mode:  vip
Ports:
 PublishedPort = 80
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress

# show service state
[root@node01 ~]# docker service ps swarm_cluster
ID             NAME              IMAGE                 NODE               DESIRED STATE   CURRENT STATE                ERROR     PORTS
ld81qfotowev   swarm_cluster.1   nginx-server:latest   node02.srv.world   Running         Running about a minute ago
szaokrbdcptt   swarm_cluster.2   nginx-server:latest   node01.srv.world   Running         Running about a minute ago

# verify it works normally
[root@node01 ~]# curl node01.srv.world
Nginx on node02
[root@node01 ~]# curl node01.srv.world
Nginx on node01
[root@node01 ~]# curl node01.srv.world
Nginx on node02
[root@node01 ~]# curl node01.srv.world
Nginx on node01
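The round-robin behaviour can also be checked in a script by counting the distinct responses. A sketch follows: the responses below replay this example's curl output; on a live cluster you would collect them with repeated requests, e.g. `for i in 1 2 3 4; do curl -s node01.srv.world; done`.

```shell
# Sketch: count how many distinct replicas answered. The responses
# replay this example's curl output; on a live cluster collect them
# with repeated curl requests instead.
RESPONSES="Nginx on node02
Nginx on node01
Nginx on node02
Nginx on node01"

# de-duplicate the responses and count the survivors
DISTINCT=$(echo "$RESPONSES" | sort -u | wc -l | tr -d ' ')
echo "distinct replicas answering: ${DISTINCT}"
```

With 2 replicas and ingress round-robin, 4 requests should yield 2 distinct responses.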
[8] If you'd like to change the number of replicas, configure as follows.
# change replicas to 3
[root@node01 ~]# docker service scale swarm_cluster=3
swarm_cluster scaled to 3
overall progress: 2 out of 3 tasks
1/3: running
2/3: running
3/3: preparing
.....
.....

[root@node01 ~]# docker service ps swarm_cluster
ID             NAME              IMAGE                 NODE               DESIRED STATE   CURRENT STATE            ERROR     PORTS
ld81qfotowev   swarm_cluster.1   nginx-server:latest   node02.srv.world   Running         Running 2 minutes ago
szaokrbdcptt   swarm_cluster.2   nginx-server:latest   node01.srv.world   Running         Running 2 minutes ago
k99ekindz2g8   swarm_cluster.3   nginx-server:latest   node03.srv.world   Running         Running 17 seconds ago

# verify accesses
[root@node01 ~]# curl node01.srv.world
Nginx on node01
[root@node01 ~]# curl node01.srv.world
Nginx on node03
[root@node01 ~]# curl node01.srv.world
Nginx on node02
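After scaling, a script can confirm that the REPLICAS column of [docker service ls] shows the desired N/N. A sketch follows; the line below replays this example's output after scaling to 3, and on a live manager you would substitute the real command.

```shell
# Sketch: check the REPLICAS column (4th field) of [docker service ls].
# The line replays this example's output; on a live manager use the
# real command, e.g.: docker service ls | grep swarm_cluster
SERVICE_LINE="yey0i2qb8lnt   swarm_cluster   replicated   3/3   nginx-server:latest   *:80->80/tcp"

REPLICAS=$(echo "$SERVICE_LINE" | awk '{print $4}')
if [ "$REPLICAS" = "3/3" ]; then
    echo "all replicas running"
else
    echo "replicas not ready yet: $REPLICAS"
fi
```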