Fedora 33

GlusterFS 8 : Replication Configuration
2020/11/03
 
Configure Storage Clustering with GlusterFS.
For example, create a replicated volume with 3 Nodes.
It's possible to create a replicated volume with only 2 Nodes, but it's not recommended because split-brain may happen on a [replica 2] volume. As a countermeasure against split-brain, it is recommended to create the volume with 3 or more Nodes, or to configure an [arbiter] volume.
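If only two nodes hold full data, an arbiter brick can break split-brain ties. A minimal sketch of creating such a volume (the volume name [vol_arbiter] and the brick paths are assumptions; the third brick becomes the arbiter):

```shell
# [replica 3 arbiter 1] keeps full data on the first two bricks and
# only file metadata on the third (arbiter) brick, so quorum can be
# reached without a third full copy of the data.
gluster volume create vol_arbiter replica 3 arbiter 1 transport tcp \
node01:/glusterfs/arbiter \
node02:/glusterfs/arbiter \
node03:/glusterfs/arbiter
```

The arbiter brick stores no file contents, so it can live on a much smaller disk than the data bricks.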
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+

 
It is strongly recommended to use partitions for GlusterFS volumes that are separate from the / partition.
This example assumes an environment where every node has an [sdb1] partition mounted at [/glusterfs].
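As a reference, preparing [sdb1] on each node might look like the following sketch (the device name [/dev/sdb] and the XFS options are assumptions; adjust them to your environment):

```shell
# create a single partition spanning the dedicated disk
# (assumes /dev/sdb is empty and unused)
parted --script /dev/sdb "mklabel gpt" "mkpart primary 0% 100%"
# format it with XFS, a commonly used filesystem for GlusterFS bricks
mkfs.xfs -i size=512 /dev/sdb1
# mount it at [/glusterfs] and make the mount persistent
mkdir -p /glusterfs
mount /dev/sdb1 /glusterfs
echo '/dev/sdb1 /glusterfs xfs defaults 0 0' >> /etc/fstab
```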
[2] Create a directory for the GlusterFS volume on all Nodes.
[root@node01 ~]#
mkdir -p /glusterfs/replica

[3] Configure Clustering as follows on a node. (any node is fine)
# probe the node

[root@node01 ~]#
gluster peer probe node02

peer probe: success.
[root@node01 ~]#
gluster peer probe node03

peer probe: success.
# confirm status

[root@node01 ~]#
gluster peer status

Number of Peers: 2

Hostname: node02
Uuid: a49b0a30-296c-42df-9a58-15132a2c0c01
State: Peer in Cluster (Connected)

Hostname: node03
Uuid: 21bf6fbb-aaf8-410d-bf2b-3bb9c8603809
State: Peer in Cluster (Connected)

# create volume

[root@node01 ~]#
gluster volume create vol_replica replica 3 transport tcp \
node01:/glusterfs/replica \
node02:/glusterfs/replica \
node03:/glusterfs/replica

volume create: vol_replica: success: please start the volume to access data
# start volume

[root@node01 ~]#
gluster volume start vol_replica

volume start: vol_replica: success
# confirm volume info

[root@node01 ~]#
gluster volume info


Volume Name: vol_replica
Type: Replicate
Volume ID: 2618f145-5be0-4298-bd0a-c2583bb0d3ec
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/replica
Brick2: node02:/glusterfs/replica
Brick3: node03:/glusterfs/replica
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
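Once the volume is started, it can be mounted from any GlusterFS client with the native FUSE client. A minimal sketch (the mount point [/mnt] is an assumption; the client needs the glusterfs-fuse package installed):

```shell
# mount the replicated volume; node01 is only used for the initial
# contact, after which the client talks to all bricks directly
mount -t glusterfs node01:/vol_replica /mnt
# optionally make the mount persistent across reboots
echo 'node01:/vol_replica /mnt glusterfs defaults,_netdev 0 0' >> /etc/fstab
```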