openSUSE Leap 16

Ceph Reef : Configure Cluster #2    2025/10/28

 

Install the distributed file system Ceph and configure a storage cluster.

This example configures a cluster with 3 nodes.
It is assumed that each of the 3 nodes has a free block device.
([/dev/sdb] is used in this example)

                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+

[1] It is assumed that [Monitor Daemon] and [Manager Daemon] have already been configured.
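
If you want to double-check that the Monitor Daemon and Manager Daemon are up before adding OSDs, viewing the cluster status on the admin node (node01 in this example) is enough; [mon] and [mgr] should appear under [services], while no OSDs exist yet at this point.

node01:~ # ceph -s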

[2] From the admin node, configure OSD (Object Storage Device) on each node.
The target block device ([/dev/sdb] in this example) will be formatted, so if it holds existing data you need to keep, back it up beforehand.
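
Before running the steps below, it may be worth confirming that [/dev/sdb] on every node is really the unused disk you intend to format. A non-destructive check like the following only reports what is on the device and does not modify it:

node01:~ # for NODE in node01 node02 node03
do
    ssh $NODE "lsblk /dev/sdb; wipefs --no-act /dev/sdb"
done
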
# if Firewalld is running, allow the required service ports beforehand

node01:~ # for NODE in node01 node02 node03
do
    ssh $NODE "firewall-cmd --add-service=ceph; firewall-cmd --runtime-to-permanent"
done 
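
If you want to verify that the rule took effect, the allowed services can be listed on each node ([ceph] should be included):

node01:~ # for NODE in node01 node02 node03
do
    ssh $NODE "firewall-cmd --list-services"
done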

# configure OSD on each node

node01:~ # for NODE in node01 node02 node03
do
    if [ ! ${NODE} = "node01" ]
    then
        scp /etc/ceph/ceph.conf ${NODE}:/etc/ceph/ceph.conf
        scp /etc/ceph/ceph.client.admin.keyring ${NODE}:/etc/ceph
        scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${NODE}:/var/lib/ceph/bootstrap-osd
    fi
    ssh $NODE \
    "chown -R ceph:ceph /etc/ceph/ceph.* /var/lib/ceph; \
    parted --script /dev/sdb 'mklabel gpt'; \
    parted --script /dev/sdb 'mkpart primary 0% 100%'; \
    ceph-volume lvm create --data /dev/sdb1"
done 

Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new caee1a4a-9d2a-4086-8904-9c9f68a967c8
Running command: vgcreate --force --yes ceph-64dcf87b-8dd2-4586-89bf-b9ab1ab18004 /dev/sdb1
 stdout: Physical volume "/dev/sdb1" successfully created.
 stdout: Creating devices file /etc/lvm/devices/system.devices
 stdout: Volume group "ceph-64dcf87b-8dd2-4586-89bf-b9ab1ab18004" successfully created
Running command: lvcreate --yes -l 40959 -n osd-block-caee1a4a-9d2a-4086-8904-9c9f68a967c8 ceph-64dcf87b-8dd2-4586-89bf-b9ab1ab18004
 stdout: Logical volume "osd-block-caee1a4a-9d2a-4086-8904-9c9f68a967c8" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /sbin/restorecon /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-64dcf87b-8dd2-4586-89bf-b9ab1ab18004/osd-block-caee1a4a-9d2a-4086-8904-9c9f68a967c8
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Running command: /usr/bin/ln -s /dev/ceph-64dcf87b-8dd2-4586-89bf-b9ab1ab18004/osd-block-caee1a4a-9d2a-4086-8904-9c9f68a967c8 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 2
--> Creating keyring file for osd.0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid caee1a4a-9d2a-4086-8904-9c9f68a967c8 --setuser ceph --setgroup ceph
 stderr: 2025-10-28T09:14:42.617+0900 7fb51c280600 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
 stderr: 2025-10-28T09:14:42.617+0900 7fb51c280600 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
 stderr: 2025-10-28T09:14:42.617+0900 7fb51c280600 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
 stderr: 2025-10-28T09:14:42.617+0900 7fb51c280600 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
--> ceph-volume lvm prepare successful for: /dev/sdb1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-64dcf87b-8dd2-4586-89bf-b9ab1ab18004/osd-block-caee1a4a-9d2a-4086-8904-9c9f68a967c8 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-64dcf87b-8dd2-4586-89bf-b9ab1ab18004/osd-block-caee1a4a-9d2a-4086-8904-9c9f68a967c8 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-caee1a4a-9d2a-4086-8904-9c9f68a967c8
 stderr: Created symlink '/etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-caee1a4a-9d2a-4086-8904-9c9f68a967c8.service' → '/usr/lib/systemd/system/ceph-volume@.service'.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink '/run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service' → '/usr/lib/systemd/system/ceph-osd@.service'.
Running command: /usr/bin/systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm create successful for: /dev/sdb1
.....
.....
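
Each pass of the loop above creates and starts one OSD (osd.0 on node01, osd.1 on node02, osd.2 on node03 in this run). If you want to see which logical volume backs the OSD on a particular node, [ceph-volume lvm list] can be run on that node, for example:

node01:~ # ssh node02 "ceph-volume lvm list"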

# confirm cluster status
# it is OK if [HEALTH_OK] is displayed

node01:~ # ceph -s

  cluster:
    id:     a1598936-342f-40c2-babf-f4b61a9e0bf2
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum node01 (age 7m)
    mgr: node01(active, since 6m)
    osd: 3 osds: 3 up (since 89s), 3 in (since 99s)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   81 MiB used, 480 GiB / 480 GiB avail
    pgs:     1 active+clean
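
If the status shows [HEALTH_WARN] or [HEALTH_ERR] instead, the reason for each failing check can be displayed with:

node01:~ # ceph health detail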

# confirm OSD tree

node01:~ # ceph osd tree

ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.46857  root default
-3         0.15619      host node01
 0    hdd  0.15619          osd.0        up   1.00000  1.00000
-5         0.15619      host node02
 1    hdd  0.15619          osd.1        up   1.00000  1.00000
-7         0.15619      host node03
 2    hdd  0.15619          osd.2        up   1.00000  1.00000
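
The [WEIGHT] column is the CRUSH weight, which by default equals the device capacity in TiB: 0.15619 TiB x 1024 ≈ 160 GiB per OSD, and the root weight 0.46857 is simply the sum of the three.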

node01:~ # ceph df 
--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
hdd    480 GiB  480 GiB  81 MiB    81 MiB       0.02
TOTAL  480 GiB  480 GiB  81 MiB    81 MiB       0.02

--- POOLS ---
POOL  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr   1    1  449 KiB        2  1.3 MiB      0    152 GiB
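
[MAX AVAIL] already accounts for replication: with the default 3 replicas and the 0.95 full ratio, roughly 480 GiB / 3 x 0.95 ≈ 152 GiB of data can still be stored in the pool.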

node01:~ # ceph osd df 
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META    AVAIL    %USE  VAR   PGS  STATUS
 0    hdd  0.15619   1.00000  160 GiB   27 MiB  588 KiB    1 KiB  26 MiB  160 GiB  0.02  1.00    1      up
 1    hdd  0.15619   1.00000  160 GiB   27 MiB  588 KiB    1 KiB  26 MiB  160 GiB  0.02  1.00    1      up
 2    hdd  0.15619   1.00000  160 GiB   27 MiB  584 KiB    1 KiB  26 MiB  160 GiB  0.02  1.00    1      up
                       TOTAL  480 GiB   81 MiB  1.7 MiB  4.7 KiB  79 MiB  480 GiB  0.02
MIN/MAX VAR: 1.00/1.00  STDDEV: 0