GlusterFS 7 : Distributed + Replication (2021/04/02)
Configure Storage Clustering with GlusterFS.
For example, create a Distributed + Replication volume with 6 nodes.
Also configure an Arbiter volume in order to avoid split-brain.
It is strongly recommended to use partitions for GlusterFS volumes that are different from the / partition.
In this example, all nodes have an [sdb1] partition mounted at [/glusterfs].

+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.54| [GlusterFS Server#4] |
|   node01.srv.world   +----------+----------+   node04.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#2] |10.0.0.52 | 10.0.0.55| [GlusterFS Server#5] |
|   node02.srv.world   +----------+----------+   node05.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.56| [GlusterFS Server#6] |
|   node03.srv.world   +----------+----------+   node06.srv.world   |
|                      |          |          |                      |
+----------------------+                     +----------------------+
         ⇑                                            ⇑
   file1, file3 ...                            file2, file4 ...
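For reference, a minimal sketch of preparing such a brick partition on one node is shown below. The device name [/dev/sdb1], the XFS filesystem, and the fstab entry are assumptions for this environment and are not part of the steps that follow; adjust them to your own disks.

# assumed example only : format [sdb1] with XFS and mount it to [/glusterfs] (run on every node)
[root@node01 ~]# mkfs.xfs -i size=512 /dev/sdb1
[root@node01 ~]# mkdir -p /glusterfs
[root@node01 ~]# echo '/dev/sdb1 /glusterfs xfs defaults 0 0' >> /etc/fstab
[root@node01 ~]# mount -a
[root@node01 ~]# df -h /glusterfs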
[1] Install GlusterFS on all nodes beforehand.
[2] Create a directory for the GlusterFS volume on all nodes.
[root@node01 ~]# mkdir -p /glusterfs/dist-replica
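The brick directory must exist on every node. If key-based SSH for root is set up between the nodes (an assumption of this sketch, not something GlusterFS itself requires), the directory can be created on the remaining nodes from [node01] like this:

# assumed convenience only : create the brick directory on node02 - node06 over SSH
[root@node01 ~]# for N in node02 node03 node04 node05 node06; do ssh $N "mkdir -p /glusterfs/dist-replica"; done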
[3] Configure clustering as follows on one node. (Any node is fine.)
# probe the nodes
[root@node01 ~]# gluster peer probe node02
peer probe: success.
[root@node01 ~]# gluster peer probe node03
peer probe: success.
[root@node01 ~]# gluster peer probe node04
peer probe: success.
[root@node01 ~]# gluster peer probe node05
peer probe: success.
[root@node01 ~]# gluster peer probe node06
peer probe: success.

# confirm status
[root@node01 ~]# gluster peer status
Number of Peers: 5

Hostname: node02
Uuid: 7f2113e9-b709-4970-bb8d-4b410454f287
State: Peer in Cluster (Connected)

Hostname: node03
Uuid: 76e27164-dd4f-4bc0-987f-934db2cf625c
State: Peer in Cluster (Connected)

Hostname: node04
Uuid: 1fd70d45-49ee-44ed-a5f7-17730cb163de
State: Peer in Cluster (Connected)

Hostname: node05
Uuid: 17fb1119-61e3-41d7-a2e2-2983b5132d46
State: Peer in Cluster (Connected)

Hostname: node06
Uuid: 79969299-134b-4390-b680-0b4a6da8c5b7
State: Peer in Cluster (Connected)

# create volume
[root@node01 ~]# gluster volume create vol_dist-replica replica 3 arbiter 1 transport tcp \
node01:/glusterfs/dist-replica \
node02:/glusterfs/dist-replica \
node03:/glusterfs/dist-replica \
node04:/glusterfs/dist-replica \
node05:/glusterfs/dist-replica \
node06:/glusterfs/dist-replica
volume create: vol_dist-replica: success: please start the volume to access data

# start volume
[root@node01 ~]# gluster volume start vol_dist-replica
volume start: vol_dist-replica: success

# confirm volume info
[root@node01 ~]# gluster volume info

Volume Name: vol_dist-replica
Type: Distributed-Replicate
Volume ID: 53bb232b-49d0-4bee-8ebd-e95443e04eee
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/dist-replica
Brick2: node02:/glusterfs/dist-replica
Brick3: node03:/glusterfs/dist-replica (arbiter)
Brick4: node04:/glusterfs/dist-replica
Brick5: node05:/glusterfs/dist-replica
Brick6: node06:/glusterfs/dist-replica (arbiter)
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
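To check the Distributed + Replication behavior, you can mount the volume from a client host with the GlusterFS native client and create a few test files; roughly half of them should be stored on the [node01]-[node03] replica set and the other half on [node04]-[node06], as shown in the diagram above. This is only a sketch: the hostname [client] is an assumption, and the GlusterFS client (glusterfs-fuse) package must be installed there.

# assumed client host : mount the volume with the native client and create test files
[root@client ~]# mount -t glusterfs node01:/vol_dist-replica /mnt
[root@client ~]# touch /mnt/file1 /mnt/file2 /mnt/file3 /mnt/file4
# each file is then stored under [/glusterfs/dist-replica] on one of the two replica sets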
[4]