Ceph Octopus : Cephadm #1 Configure Cluster (2021/04/01)
Configure a Ceph cluster with [Cephadm], which is a Ceph cluster deployment tool.
This example configures a Ceph cluster with 3 nodes as follows.
Furthermore, each storage node has a free block device for Ceph to use ([/dev/sdb] in this example).

            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
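Before starting, it is worth confirming on each storage node that the spare device is really unused; cephadm only treats a device as available when it has no partitions, no filesystem signatures, and no LVM state. A minimal check, assuming [/dev/sdb] as above:

[root@node01 ~]# lsblk /dev/sdb
[root@node01 ~]# wipefs /dev/sdb     # prints nothing if the device carries no old signatures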
[1] [Cephadm] deploys a container-based Ceph cluster, so install Podman on all nodes beforehand (refer to here).
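The referenced page covers Podman setup in detail; as a minimal sketch, the package from the CentOS 8 base repository is sufficient here:

[root@node01 ~]# dnf -y install podman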
[2] [Cephadm] uses Python 3 to configure nodes, so install Python 3 on all nodes beforehand (refer to here).
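Python 3 likewise comes straight from the standard repository; a minimal sketch:

[root@node01 ~]# dnf -y install python3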
[3] Install [Cephadm] on a node ([node01] in this example).
[root@node01 ~]# dnf -y install centos-release-ceph-octopus epel-release
[root@node01 ~]# dnf -y install cephadm
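To confirm the installation before going further, [cephadm] can report its own version (exact output varies by build):

[root@node01 ~]# cephadm version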
[4] Bootstrap a new Ceph cluster.
[root@node01 ~]# mkdir -p /etc/ceph
[root@node01 ~]# cephadm bootstrap --mon-ip 10.0.0.51
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/podman) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 3e2ca3dc-91f5-11eb-84e8-52540028a696
Verifying IP 10.0.0.51 port 3300 ...
Verifying IP 10.0.0.51 port 6789 ...
Mon IP 10.0.0.51 is in CIDR network 10.0.0.0/24
Pulling container image docker.io/ceph/ceph:v15...
.....
.....
Enabling firewalld port 8443/tcp in current zone...
Ceph Dashboard is now available at:
             URL: https://node01:8443/
            User: admin
        Password: 6cborfqu8w
You can access the Ceph CLI with:
        sudo /usr/sbin/cephadm shell --fsid 3e2ca3dc-91f5-11eb-84e8-52540028a696 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.

# enable Ceph CLI
[root@node01 ~]# alias ceph='cephadm shell -- ceph'
[root@node01 ~]# echo "alias ceph='cephadm shell -- ceph'" >> ~/.bashrc
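As an alternative to the alias above, [cephadm shell] can also be started interactively; it launches a container with [ceph.conf] and the admin keyring already mounted, so any [ceph] command works inside it. Note that the alias starts a new container on every invocation, so each command is slower than a native binary would be.

[root@node01 ~]# cephadm shell
[ceph: root@node01 /]# ceph -s
[ceph: root@node01 /]# exit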
[root@node01 ~]# ceph -v
ceph version 15.2.10 (27917a557cca91e4da407489bbaa64ad4352cc02) octopus (stable)

# OK for [HEALTH_WARN] because OSDs have not been added yet
[root@node01 ~]# ceph -s
  cluster:
    id:     3e2ca3dc-91f5-11eb-84e8-52540028a696
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 3m)
    mgr: node01.pzboml(active, since 2m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

# a container is running for each service
[root@node01 ~]# podman ps
CONTAINER ID  IMAGE                                 COMMAND               CREATED             STATUS                 PORTS  NAMES
0b6c8ecb12f1  docker.io/ceph/ceph:v15               -n mon.node01 -f ...  3 minutes ago       Up 3 minutes ago              ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-mon.node01
b6c79160f5b5  docker.io/ceph/ceph:v15               -n mgr.node01.pzb...  3 minutes ago       Up 3 minutes ago              ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-mgr.node01.pzboml
751e30561b36  docker.io/ceph/ceph:v15               -n client.crash.n...  2 minutes ago       Up 2 minutes ago              ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-crash.node01
a5be37de3fad  docker.io/prom/node-exporter:v0.18.1  --no-collector.ti...  About a minute ago  Up About a minute ago         ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-node-exporter.node01
e6a5f49f6228  docker.io/prom/prometheus:v2.18.1     --config.file=/et...  About a minute ago  Up About a minute ago         ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-prometheus.node01
6dafc50f0126  docker.io/prom/alertmanager:v0.20.0   --web.listen-addr...  About a minute ago  Up About a minute ago         ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-alertmanager.node01
66cb60c2d8fe  docker.io/ceph/ceph-grafana:6.7.4     /bin/bash             About a minute ago  Up About a minute ago         ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-grafana.node01

# a systemd service exists for each container
[root@node01 ~]# systemctl status ceph-* --no-pager
* ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@grafana.node01.service - Ceph grafana.node01 for 3e2ca3dc-91f5-11eb-84e8-52540028a696
   Loaded: loaded (/etc/systemd/system/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-31 16:50:13 JST; 1min 34s ago
  Process: 12296 ExecStopPost=/bin/rm -f //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@grafana.node01.service-pid //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@grafana.node01.service-cid (code=exited, status=0/SUCCESS)
  Process: 12295 ExecStopPost=/bin/bash /var/lib/ceph/3e2ca3dc-91f5-11eb-84e8-52540028a696/grafana.node01/unit.poststop (code=exited, status=0/SUCCESS)
  Process: 12228 ExecStop=/bin/podman stop ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-grafana.node01 (code=exited, status=0/SUCCESS)
  Process: 12312 ExecStart=/bin/bash /var/lib/ceph/3e2ca3dc-91f5-11eb-84e8-52540028a696/grafana.node01/unit.run (code=exited, status=0/SUCCESS)
  Process: 12310 ExecStartPre=/bin/rm -f //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@grafana.node01.service-pid //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@grafana.node01.service-cid (code=exited, status=0/SUCCESS)
 Main PID: 12401 (conmon)
    Tasks: 2 (limit: 23673)
   Memory: 1.2M
   CGroup: /system.slice/system-ceph\x2d3e2ca3dc\x2d91f5\x2d11eb\x2d84e8\x2d52540028a696.slice/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@grafana.node01.service
           +- 12401 /usr/bin/conmon --api-version 1 -c 66cb60c2d8fedea50fd3ec0eb…

* ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@prometheus.node01.service - Ceph prometheus.node01 for 3e2ca3dc-91f5-11eb-84e8-52540028a696
   Loaded: loaded (/etc/systemd/system/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-31 16:49:52 JST; 1min 56s ago
  Process: 10689 ExecStart=/bin/bash /var/lib/ceph/3e2ca3dc-91f5-11eb-84e8-52540028a696/prometheus.node01/unit.run (code=exited, status=0/SUCCESS)
  Process: 10687 ExecStartPre=/bin/rm -f //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@prometheus.node01.service-pid //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@prometheus.node01.service-cid (code=exited, status=0/SUCCESS)
 Main PID: 10777 (conmon)
    Tasks: 2 (limit: 23673)
   Memory: 1.1M
   CGroup: /system.slice/system-ceph\x2d3e2ca3dc\x2d91f5\x2d11eb\x2d84e8\x2d52540028a696.slice/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@prometheus.node01.service
           +- 10777 /usr/bin/conmon --api-version 1 -c e6a5f49f62283b14876162534…

* ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@alertmanager.node01.service - Ceph alertmanager.node01 for 3e2ca3dc-91f5-11eb-84e8-52540028a696
   Loaded: loaded (/etc/systemd/system/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-31 16:50:09 JST; 1min 39s ago
  Process: 11862 ExecStopPost=/bin/rm -f //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@alertmanager.node01.service-pid //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@alertmanager.node01.service-cid (code=exited, status=0/SUCCESS)
  Process: 11861 ExecStopPost=/bin/bash /var/lib/ceph/3e2ca3dc-91f5-11eb-84e8-52540028a696/alertmanager.node01/unit.poststop (code=exited, status=0/SUCCESS)
  Process: 11815 ExecStop=/bin/podman stop ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696-alertmanager.node01 (code=exited, status=0/SUCCESS)
  Process: 11899 ExecStart=/bin/bash /var/lib/ceph/3e2ca3dc-91f5-11eb-84e8-52540028a696/alertmanager.node01/unit.run (code=exited, status=0/SUCCESS)
  Process: 11897 ExecStartPre=/bin/rm -f //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@alertmanager.node01.service-pid //run/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@alertmanager.node01.service-cid (code=exited, status=0/SUCCESS)
 Main PID: 11989 (conmon)
    Tasks: 2 (limit: 23673)
   Memory: 1.1M
   CGroup: /system.slice/system-ceph\x2d3e2ca3dc\x2d91f5\x2d11eb\x2d84e8\x2d52540028a696.slice/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@alertmanager.node01.service
           +- 11989 /usr/bin/conmon --api-version 1 -c 6dafc50f01261971f53ab2246…

* ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@mon.node01.service - Ceph mon.node01 for 3e2ca3dc-91f5-11eb-84e8-52540028a696
   Loaded: loaded (/etc/systemd/system/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-31 16:47:37 JST; 4min 10s ago
 Main PID: 3981 (conmon)
    Tasks: 2 (limit: 23673)
   Memory: 1.3M
   CGroup: /system.slice/system-ceph\x2d3e2ca3dc\x2d91f5\x2d11eb\x2d84e8\x2d52540028a696.slice/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@mon.node01.service
           +- 3981 /usr/bin/conmon --api-version 1 -c 0b6c8ecb12f1f5a8313361c3ff…

* ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@mgr.node01.pzboml.service - Ceph mgr.node01.pzboml for 3e2ca3dc-91f5-11eb-84e8-52540028a696
   Loaded: loaded (/etc/systemd/system/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-31 16:47:39 JST; 4min 8s ago
 Main PID: 4294 (conmon)
    Tasks: 2 (limit: 23673)
   Memory: 1.2M
   CGroup: /system.slice/system-ceph\x2d3e2ca3dc\x2d91f5\x2d11eb\x2d84e8\x2d52540028a696.slice/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@mgr.node01.pzboml.service
           +- 4294 /usr/bin/conmon --api-version 1 -c b6c79160f5b5256ebf947097f4…

* ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@crash.node01.service - Ceph crash.node01 for 3e2ca3dc-91f5-11eb-84e8-52540028a696
   Loaded: loaded (/etc/systemd/system/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-31 16:48:58 JST; 2min 49s ago
 Main PID: 9754 (conmon)
    Tasks: 2 (limit: 23673)
   Memory: 1.1M
   CGroup: /system.slice/system-ceph\x2d3e2ca3dc\x2d91f5\x2d11eb\x2d84e8\x2d52540028a696.slice/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@crash.node01.service
           +- 9754 /usr/bin/conmon --api-version 1 -c 751e30561b36c3529e7de9ce66…

* ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@node-exporter.node01.service - Ceph node-exporter.node01 for 3e2ca3dc-91f5-11eb-84e8-52540028a696
   Loaded: loaded (/etc/systemd/system/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-31 16:49:34 JST; 2min 13s ago
 Main PID: 10333 (conmon)
    Tasks: 2 (limit: 23673)
   Memory: 27.8M
   CGroup: /system.slice/system-ceph\x2d3e2ca3dc\x2d91f5\x2d11eb\x2d84e8\x2d52540028a696.slice/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696@node-exporter.node01.service
           +- 10333 /usr/bin/conmon --api-version 1 -c a5be37de3fad54f217410e0d4…

* ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696.target - Ceph cluster 3e2ca3dc-91f5-11eb-84e8-52540028a696
   Loaded: loaded (/etc/systemd/system/ceph-3e2ca3dc-91f5-11eb-84e8-52540028a696.target; enabled; vendor preset: disabled)
   Active: active since Wed 2021-03-31 16:47:30 JST; 4min 18s ago
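Besides [podman ps] and systemd, the orchestrator module inside Ceph reports the same daemon inventory, which makes a convenient cross-check:

[root@node01 ~]# ceph orch ps
[root@node01 ~]# ceph orch host ls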