CentOS Stream 8

GlusterFS 6 : Remove Nodes (Bricks)    2021/03/22

 
Remove Nodes (Bricks) from an existing Cluster.
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                      ⇑
     file1, file3 ...             |               file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+

[1] Remove a Node (Brick) from the existing Cluster. (Run the commands on any existing node except the node being removed.)
# confirm volume info

[root@node01 ~]#
gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: 8aacffe1-82f7-4ac1-a364-0f4c0fce24bf
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
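
# (optional) Before starting the removal, you can check which files currently sit on the brick that is going to be removed. This is just a sanity check, not part of the procedure itself; it assumes the brick path [/glusterfs/distributed] on node03 shown in the volume info above.

[root@node03 ~]#
ls /glusterfs/distributed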

# start removing the brick from the volume
# this also triggers a rebalance that migrates data off the removed brick

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed start

Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: bf76e864-986a-4e1a-96dd-88f079854c45
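
# The prompt above warns about [cluster.force-migration]. If you prefer to follow that advice, the option can be checked and disabled before starting the removal; this is an optional extra step, not shown in the output above.

[root@node01 ~]#
gluster volume get vol_distributed cluster.force-migration

[root@node01 ~]#
gluster volume set vol_distributed cluster.force-migration off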

# confirm status

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed status

     Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
   node03                2       34Bytes             2             0             0            completed        0:00:00
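
# On a larger volume the rebalance can take a while; you can simply re-run the status command above, or keep an eye on it with [watch]. A small convenience example, not part of the original steps.

[root@node01 ~]#
watch -n 10 gluster volume remove-brick vol_distributed node03:/glusterfs/distributed status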

# after [status] shows [completed], commit the removal

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed commit

volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
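
# As the message above says, check the removed brick for files that were not migrated. A minimal sketch of how that could be done, assuming the brick path [/glusterfs/distributed] on node03 and a temporary client mount on [/mnt] (both chosen here for illustration): list regular files outside the internal [.glusterfs] directory, and if any are found, copy them back in through a gluster mount of the volume.

[root@node03 ~]#
find /glusterfs/distributed -type f -not -path '*/.glusterfs/*'

[root@node03 ~]#
mount -t glusterfs node01:/vol_distributed /mnt

# copy any file reported by [find] via the mount point ([somefile] is only a placeholder)
[root@node03 ~]#
cp -a /glusterfs/distributed/somefile /mnt/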

# confirm volume info

[root@node01 ~]#
gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: 8aacffe1-82f7-4ac1-a364-0f4c0fce24bf
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
performance.client-io-threads: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
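
# If the goal is to remove node03 from the cluster entirely, not just the brick from this volume, it can also be detached from the trusted storage pool afterwards. This is an optional follow-up step not shown above; make sure node03 no longer hosts bricks of any volume before detaching.

[root@node01 ~]#
gluster peer detach node03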