openSUSE Leap 16

Ceph Reef : Cephadm #2 : Configure Cluster (2025/10/28)

 

This is a fresh installation of a Ceph cluster with the Ceph cluster deployment tool [Cephadm].

This example configures a cluster with 3 nodes.
Each of the 3 nodes is required to have a spare block device.
([/dev/vdb] is used in this example)

                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
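
As an optional pre-check (an addition to the original steps), you can confirm on each node that the spare device is really empty, for example with [lsblk]:

# the device should show no partitions and no filesystem
node01:~ #
lsblk -f /dev/vdb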

[1]

This example assumes an environment where the [Monitor Daemon] and [Manager Daemon] have already been configured with [Cephadm], as shown in the link.

[2] To add other nodes to the cluster, run the commands below.
As an example, add [node02] and [node03] to the cluster.
# transfer the cluster's SSH public key to each target node

node01:~ #
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node02


Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node02'"
and check to make sure that only the key(s) you wanted were added.

node01:~ #
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node03


Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node03'"
and check to make sure that only the key(s) you wanted were added.
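
If [/etc/ceph/ceph.pub] does not exist on the admin node, the cluster SSH public key can be re-exported first (an optional recovery step, not needed when the file is already there):

node01:~ #
ceph cephadm get-pub-key > /etc/ceph/ceph.pub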

# add each target node to the cluster

node01:~ #
ceph orch host add node02.srv.world

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Added host 'node02.srv.world' with addr '10.0.0.52'

node01:~ #
ceph orch host add node03.srv.world

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Added host 'node03.srv.world' with addr '10.0.0.53'

node01:~ #
ceph orch host ls

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
HOST              ADDR       LABELS  STATUS
node01.srv.world  10.0.0.51  _admin
node02.srv.world  10.0.0.52
node03.srv.world  10.0.0.53
3 hosts in cluster
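
Optionally, the [_admin] label can be added to other hosts so that Cephadm also distributes [ceph.conf] and the admin keyring to them; this is not required for the steps below:

node01:~ #
ceph orch host label add node02.srv.world _admin
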
[3] To configure OSDs, run the commands below.
As an example, configure [node01], [node02], and [node03] as OSDs.
# list the available devices
# a device can be used if [AVAILABLE = Yes]

node01:~ #
ceph orch device ls

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
HOST              PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REFRESHED  REJECT REASONS
node01.srv.world  /dev/vdb  hdd               160G  Yes        8m ago
node02.srv.world  /dev/vdb  hdd               160G  Yes        51s ago
node03.srv.world  /dev/vdb  hdd               160G  Yes        6s ago
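
As an alternative to adding each device one by one as below, Cephadm can also consume every available device at once; note that with this setting it will keep creating OSDs automatically on any device that becomes available later:

node01:~ #
ceph orch apply osd --all-available-devices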

# configure an OSD on each node

node01:~ #
ceph orch daemon add osd node01.srv.world:/dev/vdb

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:19 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Created osd(s) 0 on host 'node01.srv.world'

node01:~ #
ceph orch daemon add osd node02.srv.world:/dev/vdb

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Created osd(s) 1 on host 'node02.srv.world'

node01:~ #
ceph orch daemon add osd node03.srv.world:/dev/vdb

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Created osd(s) 2 on host 'node03.srv.world'
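
As an optional verification (not in the original steps), per-OSD size and usage can be listed as follows:

node01:~ #
ceph osd df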

# after a while, each service starts and the cluster status becomes [HEALTH_OK]

node01:~ #
ceph -s

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
  cluster:
    id:     c67e76d2-b3b7-11f0-be4b-5254009c079b
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 4m)
    mgr: node01.zrnnru(active, since 11m), standbys: node02.ecjkpc
    osd: 3 osds: 3 up (since 15s), 3 in (since 31s)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   481 MiB used, 480 GiB / 480 GiB avail
    pgs:     1 active+clean
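
If the cluster stays in [HEALTH_WARN] instead, [ceph health detail] shows the reason, and the deployed services can be reviewed with [ceph orch ls] (optional checks):

node01:~ #
ceph health detail
node01:~ #
ceph orch ls
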
[4] To remove an OSD, run the commands below.
node01:~ #
ceph osd tree

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.46857  root default
-3         0.15619      host node01
 0    hdd  0.15619          osd.0        up   1.00000  1.00000
-5         0.15619      host node02
 1    hdd  0.15619          osd.1        up   1.00000  1.00000
-7         0.15619      host node03
 2    hdd  0.15619          osd.2        up   1.00000  1.00000

# as an example, remove OSD ID [2]

node01:~ #
ceph orch osd rm 2

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Scheduled OSD(s) for removal.
VG/LV for the OSDs won't be zapped (--zap wasn't passed).
Run the `ceph-volume lvm zap` command with `--destroy` against the VG/LV if you want them to be destroyed.
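
As the message above notes, the LVM volumes on the device are kept by default. If the device should also be wiped as part of the removal, the [--zap] option can be passed instead (a variant of the command above):

node01:~ #
ceph orch osd rm 2 --zap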

# check the removal status
# the removal is complete when the OSD no longer appears in the status output
# this can take quite a long time

node01:~ #
ceph orch osd rm status

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
OSD  HOST              STATE     PGS  REPLACE  FORCE  ZAP    DRAIN STARTED AT
2    node03.srv.world  draining    1  False    False  False  2025-10-28 04:52:54.820080
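
Once the removal has finished, the freed device can be wiped and re-added later if needed, for example as follows (an optional follow-up; this assumes the removed OSD on [node03] used [/dev/vdb]):

node01:~ #
ceph orch device zap node03.srv.world /dev/vdb --force
node01:~ #
ceph orch daemon add osd node03.srv.world:/dev/vdb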