CentOS Stream 8

Ceph Quincy : Cephadm #1 Configure Cluster
2022/06/14

 
This is how to build a new Ceph cluster with the Ceph cluster deployment tool [Cephadm].
In this example, the cluster is configured with 3 nodes.
It is required that each of the 3 nodes has a free block device.
([/dev/sdb] is used in this example)
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
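
Before starting, it is a good idea to confirm that the spare block device is really unused on every node. A minimal check, assuming [/dev/sdb] as in this example:
[root@node01 ~]#
lsblk /dev/sdb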

[1]
[Cephadm] requires Python 3 to control each node.
So, install Python 3 on all nodes in advance, referring to [1] here.
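
To confirm that Python 3 is in place, a quick check on each node is enough, for example:
[root@node01 ~]#
python3 -V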
[2]
[Cephadm] builds the Ceph cluster with containers.
So, install Podman or another container runtime on all nodes in advance.
[root@node01 ~]#
dnf module -y reset container-tools

[root@node01 ~]#
dnf module -y enable container-tools:4.0

[root@node01 ~]#
dnf module -y install container-tools:4.0/common
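
After installing, you can verify that Podman works, for example:
[root@node01 ~]#
podman --version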

[3] Install [Cephadm] on the node that will host [Monitor Daemon] and [Manager Daemon].
This example proceeds on [node01].
[root@node01 ~]#
dnf -y install centos-release-ceph-quincy epel-release
[root@node01 ~]#
dnf -y install cephadm
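
To check the installed tool, [cephadm] provides a [version] subcommand (note that it may pull the Ceph container image on first use):
[root@node01 ~]#
cephadm version
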
[4] Bootstrap a new Ceph cluster.
[root@node01 ~]#
mkdir -p /etc/ceph

[root@node01 ~]#
cephadm bootstrap --mon-ip 10.0.0.51 --allow-fqdn-hostname

Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.0.0 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 1d53a99e-eba9-11ec-96f5-525400ba92a3
Verifying IP 10.0.0.51 port 3300 ...
Verifying IP 10.0.0.51 port 6789 ...
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...

.....
.....

Enabling firewalld port 8443/tcp in current zone...
Ceph Dashboard is now available at:

             URL: https://node01.srv.world:8443/
            User: admin
        Password: wtaufz2oz5

Enabling client.admin keyring and conf on hosts with "admin" label
Enabling autotune for osd_memory_target
You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 1d53a99e-eba9-11ec-96f5-525400ba92a3 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.
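
As the log above notes, no internal network (--cluster-network) was provided, so OSD replication uses the public network. If a dedicated replication network exists, it could be passed at bootstrap time instead; a hypothetical example, assuming [10.0.10.0/24] as the cluster network:
[root@node01 ~]#
cephadm bootstrap --mon-ip 10.0.0.51 --cluster-network 10.0.10.0/24 --allow-fqdn-hostname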

# enable the Ceph CLI

[root@node01 ~]#
alias ceph='cephadm shell -- ceph'

[root@node01 ~]#
echo "alias ceph='cephadm shell -- ceph'" >> ~/.bash_profile
[root@node01 ~]#
ceph -v

Inferring fsid 1d53a99e-eba9-11ec-96f5-525400ba92a3
Using recent ceph image quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a
ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)

# [HEALTH_WARN] is expected at this point because no OSDs have been configured yet

[root@node01 ~]#
ceph -s

Inferring fsid 1d53a99e-eba9-11ec-96f5-525400ba92a3
Using recent ceph image quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a
  cluster:
    id:     1d53a99e-eba9-11ec-96f5-525400ba92a3
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01.srv.world (age 3m)
    mgr: node01.srv.world.iqrcwq(active, since 2m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
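
If you want the reason for the warning spelled out, [ceph health detail] shows it; the warning should clear once OSDs are added:
[root@node01 ~]#
ceph health detail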

# a container is running for each role's service

[root@node01 ~]#
podman ps

CONTAINER ID  IMAGE                                                                                      COMMAND               CREATED        STATUS            PORTS       NAMES
60891865f428  quay.io/ceph/ceph:v17                                                                      -n mon.node01.srv...  4 minutes ago  Up 4 minutes ago              ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3-mon-node01-srv-world
4a2f2f1cf65d  quay.io/ceph/ceph:v17                                                                      -n mgr.node01.srv...  4 minutes ago  Up 4 minutes ago              ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3-mgr-node01-srv-world-iqrcwq
6a95ee412380  quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a  -n client.crash.n...  3 minutes ago  Up 3 minutes ago              ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3-crash-node01
eefca8e982df  quay.io/prometheus/node-exporter:v1.3.1                                                    --no-collector.ti...  2 minutes ago  Up 2 minutes ago              ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3-node-exporter-node01
a26deaf62112  quay.io/prometheus/prometheus:v2.33.4                                                      --config.file=/et...  2 minutes ago  Up 2 minutes ago              ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3-prometheus-node01
13ff0ae4b220  quay.io/prometheus/alertmanager:v0.23.0                                                    --cluster.listen-...  2 minutes ago  Up 2 minutes ago              ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3-alertmanager-node01
3007d51f9ce7  quay.io/ceph/ceph-grafana:8.3.5                                                            /bin/bash             2 minutes ago  Up 2 minutes ago              ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3-grafana-node01
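
The same daemons can also be listed through the Ceph orchestrator, for example:
[root@node01 ~]#
ceph orch ps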

# a systemd service is running to manage each container

[root@node01 ~]#
systemctl status ceph-* --no-pager

*  ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3@crash.node01.service - Ceph crash.node01 for 1d53a99e-eba9-11ec-96f5-525400ba92a3
   Loaded: loaded (/etc/systemd/system/ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-06-14 15:15:18 JST; 3min 54s ago
 Main PID: 11297 (conmon)
    Tasks: 4 (limit: 101040)
   Memory: 7.8M
   CGroup: /system.slice/system-ceph\x2d1d53a99e\x2deba9\x2d11ec\x2d96f5\x2d525400ba92a3.slice/ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3@crash.node01.service
.....
.....
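
These per-daemon services are grouped under a target unit named after the cluster fsid, so they should also be checkable together:
[root@node01 ~]#
systemctl status ceph-1d53a99e-eba9-11ec-96f5-525400ba92a3.target --no-pager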