CentOS Stream 8

Ceph Pacific : Cephadm #1 Configure Cluster (2021/07/08)

 
Configure a Ceph Cluster with [Cephadm], which is a Ceph Cluster deployment tool.
This example configures a Ceph Cluster with 3 Nodes as shown below.
Furthermore, each Node has a free block device that will be used by Ceph.
(use [/dev/sdb] in this example; a quick [lsblk] check is shown after the diagram)
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
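Before starting, it is a good idea to confirm on each Node that the block device for Ceph is unused. For example (the exact output depends on your environment):

[root@node01 ~]#
lsblk /dev/sdb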

[1] [Cephadm] uses Python 3 to configure Nodes, so install Python 3 on all Nodes first.
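If Python 3 has not been installed yet, it is available from the standard repository. For example (run this on all Nodes):

[root@node01 ~]#
dnf -y install python3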
[2] [Cephadm] deploys a container-based Ceph Cluster, so install Podman on all Nodes.
[root@node01 ~]#
dnf module -y reset container-tools

[root@node01 ~]#
dnf module -y enable container-tools:3.0

[root@node01 ~]#
dnf module -y install container-tools:3.0/common
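To verify that Podman was installed from the [container-tools:3.0] stream, check its version. For example:

[root@node01 ~]#
podman -v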

[3] Install [Cephadm] on a Node.
(it is [node01] in this example)
[root@node01 ~]#
dnf -y install centos-release-ceph-pacific epel-release
[root@node01 ~]#
dnf -y install cephadm
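To verify the installation, [cephadm] can report its version. For example:

[root@node01 ~]#
cephadm version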
[4] Bootstrap a new Ceph Cluster.
[root@node01 ~]#
mkdir -p /etc/ceph

[root@node01 ~]#
cephadm bootstrap --mon-ip 10.0.0.51

Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/podman) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: f492755c-dfa7-11eb-be3f-525400238cef
Verifying IP 10.0.0.51 port 3300 ...
Verifying IP 10.0.0.51 port 6789 ...
Mon IP 10.0.0.51 is in CIDR network 10.0.0.0/24
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image docker.io/ceph/ceph:v16...

.....
.....

Enabling firewalld port 8443/tcp in current zone...
Ceph Dashboard is now available at:

             URL: https://node01.srv.world:8443/
            User: admin
        Password: 0feuzppddy

You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 54952d44-dfaa-11eb-8457-525400238cef -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/pacific/mgr/telemetry/

Bootstrap complete.
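The bootstrap above also opened port [8443/tcp] in firewalld for the Dashboard. To double-check which ports are open, for example:

[root@node01 ~]#
firewall-cmd --list-ports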

# enable the Ceph CLI

[root@node01 ~]#
alias ceph='cephadm shell -- ceph'

[root@node01 ~]#
echo "alias ceph='cephadm shell -- ceph'" >> ~/.bashrc
[root@node01 ~]#
ceph -v

ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1eaed0) pacific (stable)
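The alias works for any [ceph] subcommand. For example, the orchestrator deployed by [cephadm] can list the daemons it manages:

[root@node01 ~]#
ceph orch ps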
# [HEALTH_WARN] is expected at this point because no OSDs have been added yet

[root@node01 ~]#
ceph -s

  cluster:
    id:     54952d44-dfaa-11eb-8457-525400238cef
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 6m)
    mgr: node01.baopmi(active, since 5m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
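To see the details behind the [HEALTH_WARN] status, for example:

[root@node01 ~]#
ceph health detail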

# a container is running for each service

[root@node01 ~]#
podman ps

CONTAINER ID  IMAGE                                                                                        COMMAND               CREATED        STATUS            PORTS   NAMES
d04ad8657c23  docker.io/ceph/ceph:v16                                                                      -n mon.node01 -f ...  7 minutes ago  Up 7 minutes ago          ceph-54952d44-dfaa-11eb-8457-525400238cef-mon.node01
f48c67036745  docker.io/ceph/ceph:v16                                                                      -n mgr.node01.bao...  7 minutes ago  Up 7 minutes ago          ceph-54952d44-dfaa-11eb-8457-525400238cef-mgr.node01.baopmi
f56a54640c93  docker.io/ceph/ceph@sha256:54e95ae1e11404157d7b329d0bef866ebbb214b195a009e87aae4eba9d282949  -n client.crash.n...  5 minutes ago  Up 5 minutes ago          ceph-54952d44-dfaa-11eb-8457-525400238cef-crash.node01
062660aa0cce  docker.io/prom/node-exporter:v0.18.1                                                         --no-collector.ti...  5 minutes ago  Up 5 minutes ago          ceph-54952d44-dfaa-11eb-8457-525400238cef-node-exporter.node01
b65e5d534027  docker.io/prom/prometheus:v2.18.1                                                            --config.file=/et...  4 minutes ago  Up 4 minutes ago          ceph-54952d44-dfaa-11eb-8457-525400238cef-prometheus.node01
550e2259e584  docker.io/prom/alertmanager:v0.20.0                                                          --web.listen-addr...  4 minutes ago  Up 4 minutes ago          ceph-54952d44-dfaa-11eb-8457-525400238cef-alertmanager.node01
72aac10547c3  docker.io/ceph/ceph-grafana:6.7.4                                                            /bin/bash             4 minutes ago  Up 4 minutes ago          ceph-54952d44-dfaa-11eb-8457-525400238cef-grafana.node01
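Standard Podman commands work against these containers. For example, to view the Monitor daemon's logs (the container name is taken from the [NAMES] column above):

[root@node01 ~]#
podman logs ceph-54952d44-dfaa-11eb-8457-525400238cef-mon.node01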

# a systemd service is running for each container

[root@node01 ~]#
systemctl status ceph-* --no-pager

*  ceph-54952d44-dfaa-11eb-8457-525400238cef@alertmanager.node01.service - Ceph alertmanager.node01 for 54952d44-dfaa-11eb-8457-525400238cef
   Loaded: loaded (/etc/systemd/system/ceph-54952d44-dfaa-11eb-8457-525400238cef@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-07-08 14:10:00 JST; 4min 54s ago
 Main PID: 16560 (conmon)
    Tasks: 12 (limit: 23674)
   Memory: 17.9M
   CGroup: /system.slice/system-ceph\x2d54952d44\x2ddfaa\x2d11eb\x2d8457\x2d525400238cef.slice/ceph-54952d44-dfaa-11eb-8457-525400238cef@alertmanager.node01.service
.....
.....

*  ceph-54952d44-dfaa-11eb-8457-525400238cef@prometheus.node01.service - Ceph prometheus.node01 for 54952d44-dfaa-11eb-8457-525400238cef
   Loaded: loaded (/etc/systemd/system/ceph-54952d44-dfaa-11eb-8457-525400238cef@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-07-08 14:09:43 JST; 5min ago
 Main PID: 15087 (conmon)
    Tasks: 10 (limit: 23674)
   Memory: 28.4M
   CGroup: /system.slice/system-ceph\x2d54952d44\x2ddfaa\x2d11eb\x2d8457\x2d525400238cef.slice/ceph-54952d44-dfaa-11eb-8457-525400238cef@prometheus.node01.service
.....
.....

*  ceph-54952d44-dfaa-11eb-8457-525400238cef@crash.node01.service - Ceph crash.node01 for 54952d44-dfaa-11eb-8457-525400238cef
   Loaded: loaded (/etc/systemd/system/ceph-54952d44-dfaa-11eb-8457-525400238cef@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-07-08 14:08:46 JST; 6min ago
 Main PID: 13890 (conmon)
    Tasks: 4 (limit: 23674)
   Memory: 7.7M
   CGroup: /system.slice/system-ceph\x2d54952d44\x2ddfaa\x2d11eb\x2d8457\x2d525400238cef.slice/ceph-54952d44-dfaa-11eb-8457-525400238cef@crash.node01.service
.....
.....

*  ceph-54952d44-dfaa-11eb-8457-525400238cef@mon.node01.service - Ceph mon.node01 for 54952d44-dfaa-11eb-8457-525400238cef
   Loaded: loaded (/etc/systemd/system/ceph-54952d44-dfaa-11eb-8457-525400238cef@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-07-08 14:07:19 JST; 7min ago
 Main PID: 8248 (conmon)
    Tasks: 2 (limit: 23674)
   Memory: 1.2M
   CGroup: /system.slice/system-ceph\x2d54952d44\x2ddfaa\x2d11eb\x2d8457\x2d525400238cef.slice/ceph-54952d44-dfaa-11eb-8457-525400238cef@mon.node01.service
.....
.....

*  ceph-54952d44-dfaa-11eb-8457-525400238cef@node-exporter.node01.service - Ceph node-exporter.node01 for 54952d44-dfaa-11eb-8457-525400238cef
   Loaded: loaded (/etc/systemd/system/ceph-54952d44-dfaa-11eb-8457-525400238cef@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-07-08 14:09:25 JST; 5min ago
 Main PID: 14626 (conmon)
    Tasks: 7 (limit: 23674)
   Memory: 34.5M
   CGroup: /system.slice/system-ceph\x2d54952d44\x2ddfaa\x2d11eb\x2d8457\x2d525400238cef.slice/ceph-54952d44-dfaa-11eb-8457-525400238cef@node-exporter.node01.service
.....
.....

*  ceph-54952d44-dfaa-11eb-8457-525400238cef@grafana.node01.service - Ceph grafana.node01 for 54952d44-dfaa-11eb-8457-525400238cef
   Loaded: loaded (/etc/systemd/system/ceph-54952d44-dfaa-11eb-8457-525400238cef@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-07-08 14:10:05 JST; 4min 50s ago
 Main PID: 16934 (conmon)
    Tasks: 13 (limit: 23674)
   Memory: 32.2M
   CGroup: /system.slice/system-ceph\x2d54952d44\x2ddfaa\x2d11eb\x2d8457\x2d525400238cef.slice/ceph-54952d44-dfaa-11eb-8457-525400238cef@grafana.node01.service
.....
.....

*  ceph-54952d44-dfaa-11eb-8457-525400238cef.target - Ceph cluster 54952d44-dfaa-11eb-8457-525400238cef
   Loaded: loaded (/etc/systemd/system/ceph-54952d44-dfaa-11eb-8457-525400238cef.target; enabled; vendor preset: disabled)
   Active: active since Thu 2021-07-08 14:07:12 JST; 7min ago

Jul 08 14:07:12 node01 systemd[1]: Reached target Ceph cluster 54952d44-dfa…cef.

*  ceph-54952d44-dfaa-11eb-8457-525400238cef@mgr.node01.baopmi.service - Ceph mgr.node01.baopmi for 54952d44-dfaa-11eb-8457-525400238cef
   Loaded: loaded (/etc/systemd/system/ceph-54952d44-dfaa-11eb-8457-525400238cef@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-07-08 14:07:21 JST; 7min ago
 Main PID: 8528 (conmon)
    Tasks: 2 (limit: 23674)
   Memory: 1.2M
   CGroup: /system.slice/system-ceph\x2d54952d44\x2ddfaa\x2d11eb\x2d8457\x2d525400238cef.slice/ceph-54952d44-dfaa-11eb-8457-525400238cef@mgr.node01.baopmi.service
.....
.....
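Because each daemon has its own systemd unit, a single daemon can be restarted with [systemctl], and all daemons of the cluster are grouped under the [.target] unit shown above. For example:

[root@node01 ~]#
systemctl restart ceph-54952d44-dfaa-11eb-8457-525400238cef@mon.node01.service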