Debian 11 Bullseye

GlusterFS 9 : Remove Nodes (Bricks)
2021/08/25

 
Remove Nodes (Bricks) from an existing Cluster.
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                      ⇑
     file1, file3 ...             |               file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+

[1] Remove a Node (Brick) from the existing Cluster. (This can be run on any existing node except the one being removed.)
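Before starting, it can help to confirm that all nodes are still members of the trusted pool. A minimal check (run on any node; node names as in this example):

# list the members of the trusted storage pool
root@node01:~# gluster pool list

# or show peer state in more detail
root@node01:~# gluster peer status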
# confirm volume info

root@node01:~# gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: f4f38809-53e7-4713-9a68-7ba8ea34e530
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
features.inode-quota: off
features.quota: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

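Note that [vol_distributed] is a Distribute volume, so its data must be migrated off the brick before removal. For a Replicate volume the procedure differs: every brick holds a full copy, so nothing needs to migrate, and the reduced replica count is passed together with [force]. A sketch with a hypothetical volume [vol_replica] going from replica 3 to replica 2:

# for a replicated volume: reduce the replica count and drop the brick in one step
root@node01:~# gluster volume remove-brick vol_replica replica 2 node03:/glusterfs/replica force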
# start removing the brick from the volume

# this also triggers a rebalance that migrates data off the removed brick

root@node01:~# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed start

It is recommended that remove-brick be run with cluster.force-migration option disabled to prevent possible data corruption. Doing so will ensure that files that receive writes during migration will not be migrated and will need to be manually copied after the remove-brick commit operation. Please check the value of the option and update accordingly.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: 4d3a8fa4-b3f2-402f-ba0a-b49915e10947
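The prompt above refers to the [cluster.force-migration] option. To check its current value, or to disable it explicitly before starting, the standard volume get/set commands can be used (volume name from this example):

# show the current value of the option
root@node01:~# gluster volume get vol_distributed cluster.force-migration

# disable it so files that receive writes during migration are skipped
root@node01:~# gluster volume set vol_distributed cluster.force-migration off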

# confirm status

root@node01:~# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed status

     Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
   node03                0        0Bytes             0             0             0            completed        0:00:00
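On larger volumes the migration can take a while. Instead of re-running the command by hand, a minimal sketch that polls every 10 seconds until the status shows [completed] (it loops forever if the migration fails, so also keep an eye on the [failures] column):

# poll the remove-brick status until the migration completes
root@node01:~# while ! gluster volume remove-brick vol_distributed node03:/glusterfs/distributed status | grep -q completed; do sleep 10; done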

# once [status] shows [completed], commit the removal

root@node01:~# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed commit

volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
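After the commit, [node03] no longer serves a brick for this volume. As the message above suggests, check the old brick path for files that were not migrated, and if the node hosts no other bricks it can also be detached from the trusted pool. A sketch (paths and names from this example):

# on node03: list leftover data files, skipping Gluster's internal .glusterfs directory
root@node03:~# find /glusterfs/distributed -path '*/.glusterfs' -prune -o -type f -print

# on node01: detach node03 from the trusted pool (only if it serves no other volumes)
root@node01:~# gluster peer detach node03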

# confirm volume info

root@node01:~# gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: f4f38809-53e7-4713-9a68-7ba8ea34e530
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
performance.client-io-threads: on
features.inode-quota: off
features.quota: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
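Finally, confirm that the remaining bricks are healthy and that all data is still visible through a client mount. A minimal check (assuming the volume is mounted at a hypothetical [/mnt] on a client host):

# confirm the remaining bricks and their processes are online
root@node01:~# gluster volume status vol_distributed

# on a client: files that lived on the removed brick should still be listed
root@client:~# ls -l /mnt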