CentOS 8

Ceph Octopus : Cephadm #1 Configure Cluster
2020/07/08

 
This article describes a fresh deployment of a Ceph cluster with the Ceph cluster deployment tool [Cephadm].
In this example, the cluster is configured with three nodes.
Each of the three nodes is assumed to have a free block device.
(this example uses [/dev/sdb])
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+

[1]
[Cephadm] deploys the Ceph cluster as containers.
So, install Podman on all nodes beforehand (see [1] here).
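
If the linked page is not at hand, a minimal sketch of the Podman installation, assuming the stock CentOS 8 AppStream repository (repeat on node02 and node03 as well):

[root@node01 ~]#
dnf -y install podman

[root@node01 ~]#
podman -v
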
[2]
[Cephadm] needs Python 3 on each node in order to control them.
So, install Python 3 on all nodes beforehand (see [1] here).
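
Likewise, a minimal sketch for Python 3, assuming the stock [python3] package (run on all nodes):

[root@node01 ~]#
dnf -y install python3

[root@node01 ~]#
python3 -V
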
[3] Install [Cephadm] on the node that will run the [Monitor Daemon] and [Manager Daemon].
This example proceeds on [node01].
[root@node01 ~]#
dnf -y install centos-release-ceph-octopus epel-release
[root@node01 ~]#
dnf -y install cephadm
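
To confirm the tool is in place before bootstrapping, [cephadm version] reports the Ceph release it will deploy (note that it may pull the default container image to do so):

[root@node01 ~]#
cephadm version
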
[4] Bootstrap a new Ceph cluster.
[root@node01 ~]#
mkdir -p /etc/ceph

[root@node01 ~]#
cephadm bootstrap --mon-ip 10.0.0.51

INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/podman) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Verifying IP 10.0.0.51 port 3300 ...
INFO:cephadm:Verifying IP 10.0.0.51 port 6789 ...
INFO:cephadm:Mon IP 10.0.0.51 is in CIDR network 10.0.0.0/24
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
.....
.....
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

             URL: https://node01:8443/
            User: admin
        Password: 8vpkdb8a3t

INFO:cephadm:You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 998fbdaa-c00d-11ea-9083-52540067a927 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.
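
[bootstrap] accepts extra flags when the defaults do not fit. For example (a sketch; flag names as provided by the Octopus release of [cephadm]):

# deploy without the Prometheus/Grafana monitoring stack
cephadm bootstrap --mon-ip 10.0.0.51 --skip-monitoring-stack

# deploy without enabling the dashboard module
cephadm bootstrap --mon-ip 10.0.0.51 --skip-dashboard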

# enable the Ceph CLI

[root@node01 ~]#
alias ceph='cephadm shell -- ceph'

[root@node01 ~]#
echo "alias ceph='cephadm shell -- ceph'" >> ~/.bashrc
[root@node01 ~]#
ceph -v

ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
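
As an optional alternative to the alias, the native CLI packages can be installed on the host: [cephadm install] fetches packages (here [ceph-common]) through the repositories enabled in [3].

[root@node01 ~]#
cephadm install ceph-common

[root@node01 ~]#
ceph -v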
# [HEALTH_WARN] is fine at this point because no OSDs have been configured yet

[root@node01 ~]#
ceph -s

  cluster:
    id:     998fbdaa-c00d-11ea-9083-52540067a927
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 3m)
    mgr: node01.yzylhr(active, since 2m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown
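
The [OSD count 0 < osd_pool_default_size 3] warning simply means that fewer OSDs exist than the default pool replica count. The value behind the warning can be read back with the [ceph config] interface:

[root@node01 ~]#
ceph config get mon osd_pool_default_size

3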

# one container runs per role service

[root@node01 ~]#
podman ps

CONTAINER ID  IMAGE                                 COMMAND               CREATED             STATUS                 PORTS  NAMES
a386665816b7  docker.io/ceph/ceph-grafana:latest    /bin/bash             About a minute ago  Up About a minute ago         ceph-998fbdaa-c00d-11ea-9083-52540067a927-grafana.node01
0e230521d808  docker.io/prom/alertmanager:v0.20.0   --config.file=/et...  About a minute ago  Up About a minute ago         ceph-998fbdaa-c00d-11ea-9083-52540067a927-alertmanager.node01
d254ee76efb4  docker.io/prom/prometheus:v2.18.1     --config.file=/et...  About a minute ago  Up About a minute ago         ceph-998fbdaa-c00d-11ea-9083-52540067a927-prometheus.node01
c749fea8e7cc  docker.io/prom/node-exporter:v0.18.1  --no-collector.ti...  2 minutes ago       Up 2 minutes ago              ceph-998fbdaa-c00d-11ea-9083-52540067a927-node-exporter.node01
f0a726c5752e  docker.io/ceph/ceph:v15               -n client.crash.n...  2 minutes ago       Up 2 minutes ago              ceph-998fbdaa-c00d-11ea-9083-52540067a927-crash.node01
e0e420a74db8  docker.io/ceph/ceph:v15               -n mgr.node01.yzy...  4 minutes ago       Up 4 minutes ago              ceph-998fbdaa-c00d-11ea-9083-52540067a927-mgr.node01.yzylhr
f5905485a67e  docker.io/ceph/ceph:v15               -n mon.node01 -f ...  4 minutes ago       Up 4 minutes ago              ceph-998fbdaa-c00d-11ea-9083-52540067a927-mon.node01
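
The log of any of these daemons can be followed through [cephadm], which wraps [journalctl] for the matching systemd unit. A sketch, taking the daemon name from the [NAMES] column above:

[root@node01 ~]#
cephadm logs --name mon.node01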

# each container is started by its own systemd service

[root@node01 ~]#
systemctl status ceph-* --no-pager

*  ceph-998fbdaa-c00d-11ea-9083-52540067a927@mgr.node01.yzylhr.service - Ceph mgr.node01.yzylhr for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:52:53 JST; 4min 21s ago
.....

*  ceph-998fbdaa-c00d-11ea-9083-52540067a927@node-exporter.node01.service - Ceph node-exporter.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:54:48 JST; 2min 26s ago
.....

*  ceph-998fbdaa-c00d-11ea-9083-52540067a927@crash.node01.service - Ceph crash.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:54:16 JST; 2min 58s ago
.....

*  ceph-998fbdaa-c00d-11ea-9083-52540067a927@grafana.node01.service - Ceph grafana.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:55:35 JST; 1min 39s ago
.....

*  ceph-998fbdaa-c00d-11ea-9083-52540067a927@mon.node01.service - Ceph mon.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:52:51 JST; 4min 24s ago
.....

*  ceph-998fbdaa-c00d-11ea-9083-52540067a927@prometheus.node01.service - Ceph prometheus.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:55:14 JST; 2min 1s ago
.....

*  ceph-998fbdaa-c00d-11ea-9083-52540067a927@alertmanager.node01.service - Ceph alertmanager.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:55:31 JST; 1min 44s ago
.....
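
Since every daemon is an instance of the template unit shown above, a single daemon can also be restarted with plain [systemctl], using this cluster's fsid:

[root@node01 ~]#
systemctl restart ceph-998fbdaa-c00d-11ea-9083-52540067a927@mon.node01.service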