Fedora 31

GlusterFS 7 : Remove Nodes (Bricks)
2019/11/18
 
Remove Nodes (Bricks) from an existing Cluster.
For example, remove the Node [node03] from the existing Cluster as follows.
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                      ⇑
     file1, file3 ...             |               file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+

[1] Remove a Node from the existing Cluster. (This works on any existing node except the one being removed.)
# confirm volume info

[root@node01 ~]#
gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: 15c21e2b-5603-4bd6-8ca6-93aa838995bf
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
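
# optionally, confirm that all peers are connected before starting
# (a suggested pre-check, not part of the original steps; output varies by environment)

[root@node01 ~]#
gluster peer status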

# start removing the node from the volume

# data on the removed brick is then rebalanced onto the remaining bricks

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed start

Running remove-brick with cluster.force-migration enabled can result in data corruption. 
It is safer to disable this option so that files that receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: 095b38a9-d72c-4ea3-a12c-2cb94097b015
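
# note: [vol_distributed] is a Distribute volume, so [start] migrates data off the brick.
# for a replicated volume, remove-brick instead takes the new replica count with [force],
# since no data migration is needed (a sketch, assuming a hypothetical volume [vol_replica]
# shrunk from replica 3 to 2)

[root@node01 ~]#
gluster volume remove-brick vol_replica replica 2 node03:/glusterfs/replica force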

# confirm status

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed status


     Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
   node03                0        0Bytes             0             0             0            completed        0:00:00

# after [status] turns to [completed], commit the removal

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed commit

volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
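
# as the message above advises, check the removed brick on node03 for un-migrated files
# (a suggested check; the internal [.glusterfs] directory can be ignored)

[root@node03 ~]#
find /glusterfs/distributed -type f -not -path '*/.glusterfs/*'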

# confirm volume info

[root@node01 ~]#
gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: 15c21e2b-5603-4bd6-8ca6-93aa838995bf
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
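
# if node03 is no longer needed in the trusted pool, it can also be detached
# (a suggested follow-up, not part of the original steps)

[root@node01 ~]#
gluster peer detach node03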