CentOS Stream 9

Ceph Tentacle : Cephadm #1 Configure Cluster 2025/09/25

 

This example shows a fresh build of a Ceph cluster using the Ceph cluster deployment tool [Cephadm].

In this example, the cluster is built with 3 nodes.
Each of the 3 nodes is assumed to have a free block device.
([/dev/sdb] is used in this example.)

                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
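
If DNS does not resolve the node names above, name resolution could instead be set in [/etc/hosts] on every node. A minimal sketch using the addresses from the diagram:
[root@node01 ~]#
cat >> /etc/hosts <<'EOF'
10.0.0.51   node01.srv.world node01
10.0.0.52   node02.srv.world node02
10.0.0.53   node03.srv.world node03
EOF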

[1]

[Cephadm] requires Python 3 to control each node.
Therefore, install Python 3 on all nodes beforehand, referring to here [1].
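
For example, Python 3 could be installed from the standard repository as follows (run on every node):
[root@node01 ~]#
dnf -y install python3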

[2] [Cephadm] builds the Ceph cluster on a container basis.
Therefore, install Podman on all nodes beforehand.
[root@node01 ~]#
dnf -y install podman

[3] Install [Cephadm] on the node where [Monitor Daemon] and [Manager Daemon] will run.
This example proceeds on [node01].
[root@node01 ~]#
dnf -y install centos-release-ceph-tentacle epel-release python3-jinja2
[root@node01 ~]#
dnf -y install cephadm
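
The installed version could be verified with [cephadm version] before proceeding (the exact output depends on the build):
[root@node01 ~]#
cephadm version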
[4] Bootstrap a new Ceph cluster.
[root@node01 ~]#
mkdir -p /etc/ceph

[root@node01 ~]#
cephadm bootstrap --mon-ip 10.0.0.51 --allow-fqdn-hostname

This is a development version of cephadm.
For information regarding the latest stable release:
    https://docs.ceph.com/docs/tentacle/cephadm/install
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 5.6.0 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: a103b16a-99b8-11f0-ac94-5254000f7c42
Verifying IP 10.0.0.51 port 3300 ...
Verifying IP 10.0.0.51 port 6789 ...
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.ceph.io/ceph-ci/ceph:main...

.....
.....

firewalld ready
Ceph Dashboard is now available at:

             URL: https://node01.srv.world:8443/
            User: admin
        Password: cevpqjd7ai

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/config directory
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid a103b16a-99b8-11f0-ac94-5254000f7c42 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.
Enabling the logrotate.timer service to perform daily log rotation.
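
As the output above notes, no internal network (--cluster-network) was provided, so OSD replication traffic uses the public network. If a dedicated replication network exists, it could be supplied at bootstrap time instead; a sketch with a hypothetical [192.168.100.0/24] cluster network:
[root@node01 ~]#
cephadm bootstrap --mon-ip 10.0.0.51 --cluster-network 192.168.100.0/24 --allow-fqdn-hostname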

# enable the Ceph CLI

[root@node01 ~]#
alias ceph='cephadm shell -- ceph'

[root@node01 ~]#
echo "alias ceph='cephadm shell -- ceph'" >> ~/.bash_profile
[root@node01 ~]#
ceph -v

Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
ceph version 20.3.0-3251-gc9ce5bf4 (c9ce5bf4e30f48e0faab541f7f1b46b8fe029e04) tentacle (dev)
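
As an alternative to the alias above, the [ceph-common] package could be installed on the host so that the [ceph] command runs natively (package availability depends on the enabled repositories):
[root@node01 ~]#
cephadm install ceph-common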

# [HEALTH_WARN] is expected at this point because no OSDs have been configured yet

[root@node01 ~]#
ceph -s

Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
  cluster:
    id:     a103b16a-99b8-11f0-ac94-5254000f7c42
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 114s) [leader: node01]
    mgr: node01.scbkhl(active, since 12s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
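
Once OSDs are added in a later step, this [HEALTH_WARN] clears. A preview sketch of the orchestrator commands typically used for that (the device path [/dev/sdb] is the one assumed in this example):
[root@node01 ~]#
ceph orch device ls
[root@node01 ~]#
ceph orch daemon add osd node01.srv.world:/dev/sdb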

# a container is running for each role's service

[root@node01 ~]#
podman ps

CONTAINER ID  IMAGE                                                                                              COMMAND               CREATED             STATUS             PORTS       NAMES
47e470187fa3  quay.ceph.io/ceph-ci/ceph:main                                                                     -n mon.node01 -f ...  2 minutes ago       Up 2 minutes                   ceph-a103b16a-99b8-11f0-ac94-5254000f7c42-mon-node01
37e3ae0843dc  quay.ceph.io/ceph-ci/ceph:main                                                                     -n mgr.node01.scb...  2 minutes ago       Up 2 minutes                   ceph-a103b16a-99b8-11f0-ac94-5254000f7c42-mgr-node01-scbkhl
22375d8cd75f  quay.ceph.io/ceph-ci/ceph@sha256:c7dac7010bc8d4867022d597c5b03b40be62761a86444c41aec31dc8d564549f  -n client.ceph-ex...  About a minute ago  Up About a minute              ceph-a103b16a-99b8-11f0-ac94-5254000f7c42-ceph-exporter-node01
fddeefe116cf  quay.ceph.io/ceph-ci/ceph@sha256:c7dac7010bc8d4867022d597c5b03b40be62761a86444c41aec31dc8d564549f  -n client.crash.n...  About a minute ago  Up About a minute              ceph-a103b16a-99b8-11f0-ac94-5254000f7c42-crash-node01
27ac2f649746  quay.io/prometheus/node-exporter:v1.7.0                                                            --no-collector.ti...  About a minute ago  Up About a minute  9100/tcp    ceph-a103b16a-99b8-11f0-ac94-5254000f7c42-node-exporter-node01
047a9cca8248  quay.io/prometheus/prometheus:v2.51.0                                                              --config.file=/et...  51 seconds ago      Up 52 seconds      9090/tcp    ceph-a103b16a-99b8-11f0-ac94-5254000f7c42-prometheus-node01
f5f310340b73  quay.io/prometheus/alertmanager:v0.27.0                                                            --web.listen-addr...  36 seconds ago      Up 36 seconds      9093/tcp    ceph-a103b16a-99b8-11f0-ac94-5254000f7c42-alertmanager-node01
d1cea19a6a0b  quay.io/ceph/grafana:11.6.0                                                                                              34 seconds ago      Up 35 seconds      3000/tcp    ceph-a103b16a-99b8-11f0-ac94-5254000f7c42-grafana-node01
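
The same daemons can also be listed through cephadm itself, independent of Podman:
[root@node01 ~]#
cephadm ls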

# systemd services that start each container are running

[root@node01 ~]#
systemctl status ceph-* --no-pager

● ceph-a103b16a-99b8-11f0-ac94-5254000f7c42@mon.node01.service - Ceph mon.node01 for a103b16a-99b8-11f0-ac94-5254000f7c42
     Loaded: loaded (/etc/systemd/system/ceph-a103b16a-99b8-11f0-ac94-5254000f7c42@.service; enabled; preset: disabled)
     Active: active (running) since Thu 2025-09-25 11:39:52 JST; 2min 51s ago
   Main PID: 1914 (conmon)
      Tasks: 26 (limit: 23100)
     Memory: 43.6M (peak: 56.5M)
        CPU: 1.257s
     CGroup: /system.slice/system-ceph\x2da103b16a\x2d99b8\x2d11f0\x2dac94\x2d5254000f7c42.slice/ceph-a103b16a-99b8-11f0-ac94-5254000f7c42@mon.node01.service
.....
.....
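
All daemons of this cluster are grouped under an fsid-scoped systemd target, so they could be stopped or restarted together (substitute the fsid shown above):
[root@node01 ~]#
systemctl restart ceph-a103b16a-99b8-11f0-ac94-5254000f7c42.target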