Ceph Octopus : Cephadm #1 Configure Cluster (2020/07/08)
Configure a Ceph cluster with [Cephadm], the Ceph cluster deployment tool.
This example configures a Ceph cluster with 3 nodes as follows.
Furthermore, each storage node has a free block device to use for Ceph. ([/dev/sdb] is used in this example)

            +--------------------------+--------------------------+
            |                          |                          |
            |10.0.0.51                 |10.0.0.52                 |10.0.0.53
+-----------+-----------+  +-----------+-----------+  +-----------+-----------+
|  [node01.srv.world]   |  |  [node02.srv.world]   |  |  [node03.srv.world]   |
|     Object Storage    +--+     Object Storage    +--+     Object Storage    |
|     Monitor Daemon    |  |                       |  |                       |
|     Manager Daemon    |  |                       |  |                       |
+-----------------------+  +-----------------------+  +-----------------------+
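Name resolution between the nodes is assumed throughout. If DNS does not cover these hostnames, a minimal sketch of static [/etc/hosts] entries matching the diagram above (hostnames and addresses are this example's; adjust to your environment) is:

```shell
# Append static name resolution entries for the three example nodes.
# Run on every node; skip this if DNS already resolves the hostnames.
cat <<'EOF' >> /etc/hosts
10.0.0.51 node01.srv.world node01
10.0.0.52 node02.srv.world node02
10.0.0.53 node03.srv.world node03
EOF
```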
[1]
[Cephadm] deploys a container-based Ceph cluster,
so install Podman on all nodes; refer to here [1].
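On CentOS 8, Podman is a single package. A sketch that installs it on all three nodes from [node01] (this assumes root SSH access to each node, which this example does not set up for you; alternatively, run the dnf command locally on each node):

```shell
# Install Podman on every node over SSH (hypothetical helper loop;
# hostnames are the three example nodes from the diagram above).
for node in node01 node02 node03; do
    ssh root@"$node" "dnf -y install podman"
done
```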
[2]
[Cephadm] uses Python 3 to configure nodes,
so install Python 3 on all nodes; refer to here [1].
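Both prerequisites can be verified quickly on each node before moving on. A minimal check, assuming the packages install the standard [podman] and [python3] binaries:

```shell
# Verify the cephadm prerequisites are on PATH on this node.
for cmd in podman python3; do
    if command -v "$cmd" >/dev/null 2>&1; then
        "$cmd" --version
    else
        echo "$cmd is missing" >&2
    fi
done
```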
[3]
Install [Cephadm] on a node. (installed on [node01] in this example)
[root@node01 ~]# dnf -y install centos-release-ceph-octopus epel-release
[root@node01 ~]# dnf -y install cephadm
[4]
Bootstrap a new Ceph cluster.
[root@node01 ~]# mkdir -p /etc/ceph
[root@node01 ~]# cephadm bootstrap --mon-ip 10.0.0.51
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/podman) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Verifying IP 10.0.0.51 port 3300 ...
INFO:cephadm:Verifying IP 10.0.0.51 port 6789 ...
INFO:cephadm:Mon IP 10.0.0.51 is in CIDR network 10.0.0.0/24
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
.....
.....
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

             URL: https://node01:8443/
            User: admin
        Password: 8vpkdb8a3t

INFO:cephadm:You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 998fbdaa-c00d-11ea-9083-52540067a927 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.

# enable the Ceph CLI
[root@node01 ~]# alias ceph='cephadm shell -- ceph'
[root@node01 ~]# echo "alias ceph='cephadm shell -- ceph'" >> ~/.bashrc
[root@node01 ~]# ceph -v
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

# [HEALTH_WARN] is expected here because no OSDs have been added yet
[root@node01 ~]# ceph -s
  cluster:
    id:     998fbdaa-c00d-11ea-9083-52540067a927
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 3m)
    mgr: node01.yzylhr(active, since 2m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown

# a container is running for each service
[root@node01 ~]# podman ps
CONTAINER ID  IMAGE                                 COMMAND               CREATED             STATUS                 PORTS  NAMES
a386665816b7  docker.io/ceph/ceph-grafana:latest    /bin/bash             About a minute ago  Up About a minute ago         ceph-998fbdaa-c00d-11ea-9083-52540067a927-grafana.node01
0e230521d808  docker.io/prom/alertmanager:v0.20.0   --config.file=/et...  About a minute ago  Up About a minute ago         ceph-998fbdaa-c00d-11ea-9083-52540067a927-alertmanager.node01
d254ee76efb4  docker.io/prom/prometheus:v2.18.1     --config.file=/et...  About a minute ago  Up About a minute ago         ceph-998fbdaa-c00d-11ea-9083-52540067a927-prometheus.node01
c749fea8e7cc  docker.io/prom/node-exporter:v0.18.1  --no-collector.ti...  2 minutes ago       Up 2 minutes ago              ceph-998fbdaa-c00d-11ea-9083-52540067a927-node-exporter.node01
f0a726c5752e  docker.io/ceph/ceph:v15               -n client.crash.n...  2 minutes ago       Up 2 minutes ago              ceph-998fbdaa-c00d-11ea-9083-52540067a927-crash.node01
e0e420a74db8  docker.io/ceph/ceph:v15               -n mgr.node01.yzy...  4 minutes ago       Up 4 minutes ago              ceph-998fbdaa-c00d-11ea-9083-52540067a927-mgr.node01.yzylhr
f5905485a67e  docker.io/ceph/ceph:v15               -n mon.node01 -f ...  4 minutes ago       Up 4 minutes ago              ceph-998fbdaa-c00d-11ea-9083-52540067a927-mon.node01

# a systemd service runs each container
[root@node01 ~]# systemctl status ceph-* --no-pager
* ceph-998fbdaa-c00d-11ea-9083-52540067a927@mgr.node01.yzylhr.service - Ceph mgr.node01.yzylhr for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:52:53 JST; 4min 21s ago
.....
* ceph-998fbdaa-c00d-11ea-9083-52540067a927@node-exporter.node01.service - Ceph node-exporter.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:54:48 JST; 2min 26s ago
.....
* ceph-998fbdaa-c00d-11ea-9083-52540067a927@crash.node01.service - Ceph crash.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:54:16 JST; 2min 58s ago
.....
* ceph-998fbdaa-c00d-11ea-9083-52540067a927@grafana.node01.service - Ceph grafana.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:55:35 JST; 1min 39s ago
.....
* ceph-998fbdaa-c00d-11ea-9083-52540067a927@mon.node01.service - Ceph mon.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:52:51 JST; 4min 24s ago
.....
* ceph-998fbdaa-c00d-11ea-9083-52540067a927@prometheus.node01.service - Ceph prometheus.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:55:14 JST; 2min 1s ago
.....
* ceph-998fbdaa-c00d-11ea-9083-52540067a927@alertmanager.node01.service - Ceph alertmanager.node01 for 998fbdaa-c00d-11ea-9083-52540067a927
   Loaded: loaded (/etc/systemd/system/ceph-998fbdaa-c00d-11ea-9083-52540067a927@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-07-06 23:55:31 JST; 1min 44s ago
.....
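Besides [podman ps] and systemd, the cluster's own orchestrator can report the daemons it manages. A quick check via the [ceph] alias defined above (this requires the bootstrapped cluster, so it only runs on [node01]):

```shell
# List every cephadm-managed daemon (mon, mgr, crash, monitoring stack)
ceph orch ps
# List the deployed service specifications and their placement
ceph orch ls
```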