Fedora 35

GlusterFS 9 : Add Nodes (2021/11/09)

 
This section describes the settings for adding a node to an existing cluster.
As an example, [node03] is newly added to the distributed cluster built as shown in the link.
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                      ⇑
     file1, file3 ...             |               file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+

[1]
Referring to here, install and start the GlusterFS server on the new node, and create the directory for the GlusterFS volume at the same path as on the existing nodes (a sketch of these steps follows below).
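For reference, a minimal sketch of the preparation on the new node is shown below. The package name, service name, and the brick path /glusterfs/distributed assume the same setup as the existing nodes; adjust them to your environment.
# on the new node : install and start GlusterFS, then create the brick directory

[root@node03 ~]#
dnf -y install glusterfs-server

[root@node03 ~]#
systemctl enable --now glusterd

[root@node03 ~]#
mkdir -p /glusterfs/distributed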
[2] On any of the existing nodes, configure the addition of the new node.
# probe the new node

[root@node01 ~]#
gluster peer probe node03

peer probe: success.
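Note that the probe only succeeds if the hostname [node03] is resolvable from the existing nodes. If DNS is not set up for it, registering the new node in /etc/hosts on every node beforehand also works; an example entry, using the address from the diagram above, is shown below.
# example : add the new node to /etc/hosts on each node (only if DNS is not available)

[root@node01 ~]#
echo "10.0.0.53 node03.srv.world node03" >> /etc/hosts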
# show the status

[root@node01 ~]#
gluster peer status

Number of Peers: 2

Hostname: node02
Uuid: 447dedcb-fe9b-4743-851c-a7c2adef0043
State: Peer in Cluster (Connected)

Hostname: node03
Uuid: 663ac9bb-350f-4e21-ad9f-2a60fbf8bb45
State: Peer in Cluster (Connected)

# confirm the existing volume info

[root@node01 ~]#
gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: 3a671a01-2a6c-4c4d-858c-4c8e401bc23c
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.nl-cache-timeout: 600
performance.nl-cache: on
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
performance.cache-samba-metadata: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.write-behind: off
user.smb: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

# add the new node (brick) to the existing volume

[root@node01 ~]#
gluster volume add-brick vol_distributed node03:/glusterfs/distributed

volume add-brick: success
# confirm the volume info

[root@node01 ~]#
gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: 3a671a01-2a6c-4c4d-858c-4c8e401bc23c
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.nl-cache-timeout: 600
performance.nl-cache: on
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
performance.cache-samba-metadata: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.write-behind: off
user.smb: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
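
In addition to [gluster volume info], it is possible to verify that the brick process on the new node is actually online with the status command (the exact output depends on the environment).
# confirm that the new brick's process is online

[root@node01 ~]#
gluster volume status vol_distributed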

# after adding a new node, run a rebalance of the volume

[root@node01 ~]#
gluster volume rebalance vol_distributed fix-layout start

volume rebalance: vol_distributed: success: Rebalance on vol_distributed has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 488f8c61-5899-4f52-974c-b78b69529638
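The [fix-layout] option only recalculates the directory layout so that newly created files can be placed on the added brick; to migrate existing files onto the new brick as well, the rebalance can be started without [fix-layout] instead. In either case, the progress can be checked with the status command, as the message above suggests.
# check the progress of the rebalance

[root@node01 ~]#
gluster volume rebalance vol_distributed status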