CentOS Stream 9

Ceph Reef : Cephadm #1 Configure Cluster (2023/08/21)

 
Configure a Ceph Cluster with [Cephadm], which is a Ceph Cluster deployment tool.
In this example, configure a Ceph Cluster with 3 Nodes as follows.
Furthermore, each Storage Node has a free block device that the Ceph Cluster will use.
(use [/dev/sdb] on this example; a quick check is shown below the diagram)
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
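
# (optional) quick check that the free block device exists and is unused on each Storage Node
# ([/dev/sdb] is the example device in this article; [lsblk] from util-linux is assumed to be available)
[root@node01 ~]#
lsblk /dev/sdb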

[1]
[Cephadm] uses Python 3 to configure Nodes,
so install Python 3 on all Nodes first (refer to here).
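# for example, Python 3 can be installed from the standard repository
# (the default [python3] package is assumed to be sufficient)
[root@node01 ~]#
dnf -y install python3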
[2] [Cephadm] deploys a Container based Ceph Cluster,
so install Podman on all Nodes.
[root@node01 ~]#
dnf -y install podman
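
# (optional) confirm that Podman is installed correctly
[root@node01 ~]#
podman --version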

[3] Install [Cephadm] on a Node.
(it's [node01] in this example)
[root@node01 ~]#
dnf -y install centos-release-ceph-reef epel-release
[root@node01 ~]#
dnf -y install cephadm
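
# (optional) confirm the installed [Cephadm] version before bootstrapping
[root@node01 ~]#
cephadm version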
[4] Bootstrap a new Ceph Cluster.
[root@node01 ~]#
mkdir -p /etc/ceph

[root@node01 ~]#
cephadm bootstrap --mon-ip 10.0.0.51 --allow-fqdn-hostname

Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.7.0 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: cc27a71e-3fda-11ee-99e9-5254000f7c42
Verifying IP 10.0.0.51 port 3300 ...
Verifying IP 10.0.0.51 port 6789 ...
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v18...

.....
.....

firewalld ready
Ceph Dashboard is now available at:

             URL: https://node01.srv.world:8443/
            User: admin
        Password: m4npkoh4yj

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid cc27a71e-3fda-11ee-99e9-5254000f7c42 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.
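
# (optional) confirm that the Monitor (3300/6789) and Dashboard (8443) ports are listening
# (a simple sanity check with [ss]; the ports are taken from the bootstrap output above)
[root@node01 ~]#
ss -tlnp | grep -E '3300|6789|8443'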

# enable the Ceph CLI

[root@node01 ~]#
alias ceph='cephadm shell -- ceph'

[root@node01 ~]#
echo "alias ceph='cephadm shell -- ceph'" >> ~/.bash_profile
[root@node01 ~]#
ceph -v

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
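
# (optional) instead of the alias above, the native [ceph] command can also be installed
# via the [cephadm] helper (not required for the following steps)
[root@node01 ~]#
cephadm add-repo --release reef
[root@node01 ~]#
cephadm install ceph-common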

# [HEALTH_WARN] is OK at this point because no OSDs have been added yet

[root@node01 ~]#
ceph -s

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
  cluster:
    id:     cc27a71e-3fda-11ee-99e9-5254000f7c42
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 2m)
    mgr: node01.lvvxbq(active, since 40s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
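
# (optional) show the details of the current warning
[root@node01 ~]#
ceph health detail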

# a container is running for each service

[root@node01 ~]#
podman ps

CONTAINER ID  IMAGE                                                                                      COMMAND               CREATED             STATUS             PORTS       NAMES
b56682b2ec96  quay.io/ceph/ceph:v18                                                                      -n mon.node01 -f ...  2 minutes ago       Up 2 minutes                   ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42-mon-node01
db0455103ef6  quay.io/ceph/ceph:v18                                                                      -n mgr.node01.lvv...  2 minutes ago       Up 2 minutes                   ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42-mgr-node01-lvvxbq
96b6eb87eba4  quay.io/ceph/ceph@sha256:90706e97b3abcc1902442f83f478f5eabe9ff730946f187dcb4ea82694b3870a  -n client.ceph-ex...  2 minutes ago       Up 2 minutes                   ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42-ceph-exporter-node01
2b74c7bf31d4  quay.io/ceph/ceph@sha256:90706e97b3abcc1902442f83f478f5eabe9ff730946f187dcb4ea82694b3870a  -n client.crash.n...  2 minutes ago       Up 2 minutes                   ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42-crash-node01
231064ecd9f3  quay.io/prometheus/node-exporter:v1.5.0                                                    --no-collector.ti...  About a minute ago  Up About a minute              ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42-node-exporter-node01
3dec7289b325  quay.io/prometheus/prometheus:v2.43.0                                                      --config.file=/et...  About a minute ago  Up About a minute              ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42-prometheus-node01
b22f9b967e53  quay.io/prometheus/alertmanager:v0.25.0                                                    --cluster.listen-...  About a minute ago  Up About a minute              ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42-alertmanager-node01
7dc98a4e093f  quay.io/ceph/ceph-grafana:9.4.7                                                            /bin/bash             About a minute ago  Up About a minute              ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42-grafana-node01
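
# (optional) [Cephadm] can also list the daemons it manages on this host
[root@node01 ~]#
cephadm ls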

# a systemd service is generated for each container

[root@node01 ~]#
systemctl status ceph-* --no-pager

*  ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42@node-exporter.node01.service - Ceph node-exporter.node01 for cc27a71e-3fda-11ee-99e9-5254000f7c42
     Loaded: loaded (/etc/systemd/system/ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42@.service; enabled; preset: disabled)
     Active: active (running) since Mon 2023-08-21 13:26:58 JST; 2min 21s ago
   Main PID: 8384 (conmon)
      Tasks: 6 (limit: 23077)
     Memory: 37.9M
        CPU: 987ms
     CGroup: /system.slice/system-ceph\x2dcc27a71e\x2d3fda\x2d11ee\x2d99e9\x2d5254000f7c42.slice/ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42@node-exporter.node01.service
.....
.....
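
# if a daemon needs to be restarted, its systemd unit can be used, for example the Monitor on this host
# (the unit name follows the [ceph-<fsid>@<daemon>.<host>.service] pattern shown above)
[root@node01 ~]#
systemctl restart ceph-cc27a71e-3fda-11ee-99e9-5254000f7c42@mon.node01.service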