GlusterFS 6 : Add Nodes (Bricks)
2019/10/08
Add Nodes (Bricks) to an existing Cluster.
For example, add a Node [node03] to the existing Cluster as follows.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                     ⇑
    file1, file3 ...              |              file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+
[1]
Install GlusterFS on the new Node, referring to here, and then create a directory for the GlusterFS volume at the same path as on the other Nodes, as in the sketch below.
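As a minimal sketch of that step, assuming the new node uses the same brick path as node01 and node02 (/glusterfs/distributed, as shown in the volume info in step [2]):

# on the new node, create the brick directory at the same path as the other nodes
[root@node03 ~]# mkdir -p /glusterfs/distributed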
[2]
Add the new Node to the existing Cluster. (It is OK to run the commands on any existing node.)
# probe new node
[root@node01 ~]# gluster peer probe node03
peer probe: success.

# confirm status
[root@node01 ~]# gluster peer status
Number of Peers: 2

Hostname: node02
Uuid: d438d612-77f3-4802-9978-336aa722f796
State: Peer in Cluster (Connected)

Hostname: node03
Uuid: 1f24dd87-c20e-4e14-b6e1-36a7f8b0f360
State: Peer in Cluster (Connected)

# confirm existing volume
[root@node01 ~]# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 88f71e9f-d509-44c2-a62f-9524bba93fe5
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

# add new node
[root@node01 ~]# gluster volume add-brick vol_distributed node03:/glusterfs/distributed
volume add-brick: success

# confirm volume info
[root@node01 ~]# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 88f71e9f-d509-44c2-a62f-9524bba93fe5
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

# after adding new node, run rebalance volume
[root@node01 ~]# gluster volume rebalance vol_distributed fix-layout start
volume rebalance: vol_distributed: success: Rebalance on vol_distributed has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 6290db2e-7f55-49af-9c54-34f70f41747a
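The rebalance output above suggests checking progress with the rebalance status command. A brief sketch of that follow-up check (output omitted here), plus a volume status check to confirm the new brick on node03 is online:

# check rebalance progress (shows per-node rebalance state)
[root@node01 ~]# gluster volume rebalance vol_distributed status
# confirm the brick processes, including the new brick on node03, are online
[root@node01 ~]# gluster volume status vol_distributed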