openSUSE Leap 16

Ceph Reef : Cephadm #1 Configure Cluster 2025/10/28

 

This is a fresh setup of a Ceph cluster using the Ceph cluster deployment tool [Cephadm].

This example configures the cluster with 3 nodes.
Each of the 3 nodes is required to have a free block device.
(this example uses [/dev/vdb]; see the quick check below the diagram)

                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
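
As a quick check of the free block device prerequisite, [lsblk] on each node should show the device without partitions or mountpoints ([/dev/vdb] is the device name assumed in this example; adjust to your environment).

node01:~ #
lsblk /dev/vdb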

[1]

Start an SSH server in advance, refer to here.
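
As a quick confirmation (assuming the default [sshd] service name on openSUSE), verify the SSH server is active on each node:

node01:~ #
systemctl is-active sshd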

[2] [Cephadm] builds the Ceph cluster container-based.
Therefore, install Podman on all nodes.
node01:~ #
zypper -n install podman python313
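
To confirm the installation, the versions can be checked like this (assuming the [python313] package provides the [python3.13] binary):

node01:~ #
podman --version

node01:~ #
python3.13 --version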

[3] Install [Cephadm] on the node where [Monitor Daemon] and [Manager Daemon] will be configured.
This example proceeds on [node01].
node01:~ #
zypper -n install cephadm python313-jinja2
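
As an optional check, [cephadm version] reports the installed version (depending on the cephadm release it may also query the default container image):

node01:~ #
cephadm version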

[4] Bootstrap a new Ceph cluster.
node01:~ #
mkdir -p /etc/ceph

node01:~ #
cephadm bootstrap --mon-ip 10.0.0.51 --allow-fqdn-hostname

Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 5.4.2 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: c67e76d2-b3b7-11f0-be4b-5254009c079b
Verifying IP 10.0.0.51 port 3300 ...
Verifying IP 10.0.0.51 port 6789 ...
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v18...

.....
.....

firewalld ready
Ceph Dashboard is now available at:

             URL: https://node01.srv.world:8443/
            User: admin
        Password: 6vym544py4

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /sbin/cephadm shell --fsid c67e76d2-b3b7-11f0-be4b-5254009c079b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.
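
The output above notes that no internal network was provided, so OSD replication defaults to the public network. If a dedicated replication network is available, it can be supplied at bootstrap time with [--cluster-network]; the subnet below is only a hypothetical example for a second NIC:

node01:~ #
cephadm bootstrap --mon-ip 10.0.0.51 --cluster-network 10.0.1.0/24 --allow-fqdn-hostname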

# enable the Ceph CLI

node01:~ #
alias ceph='cephadm shell -- ceph'

node01:~ #
echo "alias ceph='cephadm shell -- ceph'" >> ~/.bash_profile
node01:~ #
ceph -v

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
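
Instead of the alias, the interactive [cephadm shell] shown in the bootstrap output above also works; it opens a shell inside the Ceph container where the [ceph] command is available directly (leave it with [exit]):

node01:~ #
cephadm shell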

# OSDs are not configured yet, so [HEALTH_WARN] is OK at this point

node01:~ #
ceph -s

Inferring fsid c67e76d2-b3b7-11f0-be4b-5254009c079b
Inferring config /var/lib/ceph/c67e76d2-b3b7-11f0-be4b-5254009c079b/mon.node01/config
Using ceph image with id '0f5473a1e726' and tag 'v18' created on 2025-05-07 17:48:39 +0000 UTC
quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
  cluster:
    id:     c67e76d2-b3b7-11f0-be4b-5254009c079b
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 3m)
    mgr: node01.zrnnru(active, since 75s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
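
This warning clears once OSDs are added, which a later step covers. For reference, a single OSD could be created from the free block device with [ceph orch daemon add osd] (the host name must match the one registered with the orchestrator; [node01.srv.world] and [/dev/vdb] are this example's assumptions):

node01:~ #
ceph orch daemon add osd node01.srv.world:/dev/vdb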

# a container is running for each role service

node01:~ #
podman ps

CONTAINER ID  IMAGE                                                                                      COMMAND               CREATED             STATUS             PORTS       NAMES
14d9bee40f93  quay.io/ceph/ceph:v18                                                                      -n mon.node01 -f ...  4 minutes ago       Up 4 minutes                   ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b-mon-node01
5411b8f532c0  quay.io/ceph/ceph:v18                                                                      -n mgr.node01.zrn...  4 minutes ago       Up 4 minutes                   ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b-mgr-node01-zrnnru
5d59c82be73c  quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0  -n client.ceph-ex...  3 minutes ago       Up 3 minutes                   ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b-ceph-exporter-node01
25accdfb2d4d  quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0  -n client.crash.n...  2 minutes ago       Up 2 minutes                   ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b-crash-node01
a42b96370fd6  quay.io/prometheus/node-exporter:v1.5.0                                                    --no-collector.ti...  2 minutes ago       Up 2 minutes       9100/tcp    ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b-node-exporter-node01
9bded4b39757  quay.io/prometheus/prometheus:v2.43.0                                                      --config.file=/et...  2 minutes ago       Up 2 minutes       9090/tcp    ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b-prometheus-node01
70559c4e3fbc  quay.io/prometheus/alertmanager:v0.25.0                                                    --cluster.listen-...  About a minute ago  Up About a minute  9093/tcp    ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b-alertmanager-node01
dce10ee57e32  quay.io/ceph/ceph-grafana:9.4.7                                                            /bin/bash             About a minute ago  Up About a minute  3000/tcp    ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b-grafana-node01
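
The same daemons can also be listed from the orchestrator's point of view:

node01:~ #
ceph orch ps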

# a systemd service is running to start each container

node01:~ #
systemctl status ceph-* --no-pager

● ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b@mon.node01.service - Ceph mon.node01 for c67e76d2-b3b7-11f0-be4b-5254009c079b
     Loaded: loaded (/etc/systemd/system/ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b@.service; enabled; preset: disabled)
     Active: active (running) since Tue 2025-10-28 13:37:39 JST; 4min 49s ago
 Invocation: fd9859a1f8534477a24985934bfafd46
   Main PID: 3518 (conmon)
      Tasks: 26 (limit: 4662)
        CPU: 1.864s
     CGroup: /system.slice/system-ceph\x2dc67e76d2\x2db3b7\x2d11f0\x2dbe4b\x2d5254009c079b.slice/ceph-c67e76d2-b3b7-11f0-be4b-52
.....
.....
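
Cephadm also groups these units under a per-cluster systemd target, so all daemons on the host can be inspected or restarted together (the fsid below is the one from this example):

node01:~ #
systemctl status ceph-c67e76d2-b3b7-11f0-be4b-5254009c079b.target --no-pager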