Ceph Tentacle : Cephadm #2 Configure Cluster (2025/09/25)
Configure a Ceph Cluster with [Cephadm], which is a Ceph Cluster deployment tool.
In this example, configure a Ceph Cluster with 3 Nodes as follows.
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
| [1] | Configure basic Cluster settings with [Cephadm], referring to here. |
| [2] | To add Nodes to the Cluster, run as follows. For example, add [node02], [node03]. |
# transfer SSH public key
[root@node01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node02

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node02'"
and check to make sure that only the key(s) you wanted were added.

[root@node01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node03

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node03'"
and check to make sure that only the key(s) you wanted were added.

# add target Nodes to Cluster
[root@node01 ~]# ceph orch host add node02.srv.world
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
Added host 'node02.srv.world' with addr '10.0.0.52'
[root@node01 ~]# ceph orch host add node03.srv.world
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
Added host 'node03.srv.world' with addr '10.0.0.53'
[root@node01 ~]# ceph orch host ls
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
HOST              ADDR       LABELS  STATUS
node01.srv.world  10.0.0.51  _admin
node02.srv.world  10.0.0.52
node03.srv.world  10.0.0.53
3 hosts in cluster
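Hosts can also be described in a YAML host specification and applied in one step with [ceph orch apply -i]. The following is only a minimal sketch of that alternative; the file name [hosts.yaml] and the [osd-node] label are arbitrary examples, not values used elsewhere on this page.

# (optional) minimal host spec sketch : file name and label are arbitrary examples
[root@node01 ~]# vi hosts.yaml
service_type: host
hostname: node02.srv.world
addr: 10.0.0.52
labels:
  - osd-node
---
service_type: host
hostname: node03.srv.world
addr: 10.0.0.53
labels:
  - osd-node

# apply the spec : equivalent to running [ceph orch host add] for each entry
[root@node01 ~]# ceph orch apply -i hosts.yaml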
| [3] | To configure OSDs, run as follows. For example, configure [node01], [node02], [node03]. |
# list available devices
# devices with [AVAILABLE = Yes] can be used
[root@node01 ~]# ceph orch device ls
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
HOST              PATH      TYPE  DEVICE ID  SIZE  AVAILABLE  REFRESHED  REJECT REASONS
node01.srv.world  /dev/vdb  hdd              160G  Yes        9m ago
node02.srv.world  /dev/vdb  hdd              160G  Yes        2m ago
node03.srv.world  /dev/vdb  hdd              160G  Yes        3s ago

# configure OSD
[root@node01 ~]# ceph orch daemon add osd node01.srv.world:/dev/vdb
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
Created osd(s) 0 on host 'node01.srv.world'
[root@node01 ~]# ceph orch daemon add osd node02.srv.world:/dev/vdb
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
Created osd(s) 1 on host 'node02.srv.world'
[root@node01 ~]# ceph orch daemon add osd node03.srv.world:/dev/vdb
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
Created osd(s) 2 on host 'node03.srv.world'

# after a few minutes, the status turns to [HEALTH_OK]
[root@node01 ~]# ceph -s
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
  cluster:
    id:     a103b16a-99b8-11f0-ac94-5254000f7c42
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 96s) [leader: node01]
    mgr: node01.scbkhl(active, since 10m), standbys: node02.xqykao
    osd: 3 osds: 3 up (since 15s), 3 in (since 26s)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   81 MiB used, 480 GiB / 480 GiB avail
    pgs:     1 active+clean
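Instead of adding each device by hand as above, it is also possible to let the Orchestrator create OSDs on every device reported as available by [ceph orch device ls]. This is only a sketch of that alternative, not a step required for this page.

# (optional) create OSDs automatically on all available devices
[root@node01 ~]# ceph orch apply osd --all-available-devices

# if disks added later should not be consumed automatically, mark the service unmanaged
[root@node01 ~]# ceph orch apply osd --all-available-devices --unmanaged=true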
| [4] | To remove an OSD, run as follows. |
[root@node01 ~]# ceph osd tree
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.46857  root default
-3         0.15619      host node01
 0    hdd  0.15619          osd.0        up   1.00000  1.00000
-5         0.15619      host node02
 1    hdd  0.15619          osd.1        up   1.00000  1.00000
-7         0.15619      host node03
 2    hdd  0.15619          osd.2        up   1.00000  1.00000

# for example, remove OSD ID [2]
[root@node01 ~]# ceph orch osd rm 2
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
Scheduled OSD(s) for removal.
VG/LV for the OSDs won't be zapped (--zap wasn't passed).
Run the `ceph-volume lvm zap` command with `--destroy` against the VG/LV if you want them to be destroyed.

# show removal status
# the removal is complete when no entries are shown
# it may take a long time
[root@node01 ~]# ceph orch osd rm status
Inferring fsid a103b16a-99b8-11f0-ac94-5254000f7c42
Inferring config /var/lib/ceph/a103b16a-99b8-11f0-ac94-5254000f7c42/mon.node01/config
OSD  HOST              STATE     PGS  REPLACE  FORCE  ZAP    DRAIN STARTED AT
2    node03.srv.world  draining  1    False    False  False  2025-09-25 02:52:41.773987
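As the message above notes, the VG/LV on the drained device is not zapped by default. If the disk should be reused for a new OSD, it can be wiped through the Orchestrator, or [--zap] can be passed at removal time instead. A minimal sketch, assuming the freed device is [/dev/vdb] on [node03]:

# wipe the freed device so it is reported as available again
[root@node01 ~]# ceph orch device zap node03.srv.world /dev/vdb --force

# or destroy the VG/LV in the same step as the removal
[root@node01 ~]# ceph orch osd rm 2 --zap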