GlusterFS 9 : Add Nodes (2021/07/09)
This is how to add a node to an existing cluster.
As an example, a new node [node03] is added to the distributed cluster built as shown in the linked article.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                     ⇑
   file1, file3 ...               |             file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+
[1]
Referring to the link here, install and start the GlusterFS server on the new node,
and create the directory for the GlusterFS volume at the same path as on the existing nodes.
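As a concrete reference, preparing [node03] might look like the following sketch. The repository and package names and the brick path are assumptions taken from the existing nodes; adjust them to match your environment.

```shell
# On node03: install the GlusterFS 9 server (assumed CentOS Storage SIG
# release package and [glusterfs-server] package name) and start it
[root@node03 ~]# dnf -y install centos-release-gluster9
[root@node03 ~]# dnf -y install glusterfs-server
[root@node03 ~]# systemctl enable --now glusterd

# create the brick directory at the same path as on the existing nodes
[root@node03 ~]# mkdir -p /glusterfs/distributed
```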
[2]
On any one of the existing nodes, configure the cluster to add the new node.
# probe the new node
[root@node01 ~]# gluster peer probe node03
peer probe: success.

# show peer status
[root@node01 ~]# gluster peer status
Number of Peers: 2

Hostname: node02
Uuid: 5011d4c4-22d6-4a96-b7dc-b392c52adaf9
State: Peer in Cluster (Connected)

Hostname: node03
Uuid: 915835cc-2549-41fc-a736-4694703e7ec4
State: Peer in Cluster (Connected)

# confirm existing volume info (still two bricks before adding)
[root@node01 ~]# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 9f054037-2a35-4186-9782-b66b4f08757b
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

# add the new node's brick to the existing volume
[root@node01 ~]# gluster volume add-brick vol_distributed node03:/glusterfs/distributed
volume add-brick: success

# confirm volume info (now three bricks)
[root@node01 ~]# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 9f054037-2a35-4186-9782-b66b4f08757b
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

# after adding a new node, rebalance the volume
[root@node01 ~]# gluster volume rebalance vol_distributed fix-layout start
volume rebalance: vol_distributed: success: Rebalance on vol_distributed has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 20445882-1eb0-4c80-b9ec-2edc66aa0e96

# rebalancing is finished once [Status] shows [completed]
[root@node01 ~]# gluster volume status
Status of volume: vol_distributed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node01:/glusterfs/distributed         49153     0          Y       54155
Brick node02:/glusterfs/distributed         49153     0          Y       35095
Brick node03:/glusterfs/distributed         49152     0          Y       3894
Quota Daemon on localhost                   N/A       N/A        Y       56939
Quota Daemon on node02                      N/A       N/A        Y       35206
Quota Daemon on node03                      N/A       N/A        Y       3911

Task Status of Volume vol_distributed
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 20445882-1eb0-4c80-b9ec-2edc66aa0e96
Status               : completed
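Note that [fix-layout] only recalculates the directory layout so that newly created files can be placed on the new brick; files that already exist are not moved. If you also want existing data migrated onto [node03], a full data rebalance can be started instead, as in this sketch (run on any one node; the output format varies by version, so no sample output is shown):

```shell
# start a rebalance that also migrates existing files to the new brick
[root@node01 ~]# gluster volume rebalance vol_distributed start

# check progress; each node reports counts of scanned and rebalanced files
[root@node01 ~]# gluster volume rebalance vol_distributed status
```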