CentOS Stream 9

Ceph Reef : Cephadm #2 Configure Cluster
2023/08/21

 
Configure a Ceph Cluster with [Cephadm], which is a Ceph Cluster deployment tool.
This example configures a Ceph Cluster with 3 Nodes as follows.
Furthermore, each Node has a free block device to use for Ceph OSDs.
(use [/dev/vdb] on this example)
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+

[1]
[2] To add Nodes to the Cluster, run as follows.
For example, add [node02] and [node03].
# transfer SSH public key

[root@node01 ~]#
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node02


Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node02'"
and check to make sure that only the key(s) you wanted were added.

[root@node01 ~]#
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node03


Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node03'"
and check to make sure that only the key(s) you wanted were added.
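
# note that the target Nodes also need the Cephadm requirements (Podman, Python 3, LVM2, time synchronization)
# if they have not been prepared yet like the Admin Node, install the packages on them first, for example

[root@node02 ~]#
dnf -y install podman python3 lvm2 chrony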

# add target Nodes to Cluster

[root@node01 ~]#
ceph orch host add node02.srv.world

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
Added host 'node02.srv.world' with addr '10.0.0.52'
[root@node01 ~]#
ceph orch host add node03.srv.world

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
Added host 'node03.srv.world' with addr '10.0.0.53'
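
# if the Node name can not be resolved from the Admin Node, it's also possible to add a host with its IP address specified explicitly, like follows

[root@node01 ~]#
ceph orch host add node02.srv.world 10.0.0.52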

[root@node01 ~]#
ceph orch host ls

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
HOST              ADDR       LABELS  STATUS
node01.srv.world  10.0.0.51  _admin
node02.srv.world  10.0.0.52
node03.srv.world  10.0.0.53
3 hosts in cluster
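
# only [node01] has the [_admin] label, so only it holds [ceph.conf] and the admin keyring under [/etc/ceph]
# if you'd like to run [ceph] commands on another Node as well, add the label to it, for example

[root@node01 ~]#
ceph orch host label add node02.srv.world _admin
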
[3] To configure OSDs, run as follows.
For example, configure [node01], [node02], and [node03].
# list available devices
# a device can be used for an OSD if [AVAILABLE] is [Yes]

[root@node01 ~]#
ceph orch device ls

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
HOST              PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REFRESHED  REJECT REASONS
node01.srv.world  /dev/vdb  hdd               160G  Yes        7m ago
node02.srv.world  /dev/vdb  hdd               160G  Yes        46s ago
node03.srv.world  /dev/vdb  hdd               160G  Yes        5s ago
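
# this example creates an OSD on each device explicitly with the commands below
# as an alternative, Cephadm can also create OSDs on every available device automatically with the following

[root@node01 ~]#
ceph orch apply osd --all-available-devices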

# configure OSD

[root@node01 ~]#
ceph orch daemon add osd node01.srv.world:/dev/vdb

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
Created osd(s) 0 on host 'node01.srv.world'
[root@node01 ~]#
ceph orch daemon add osd node02.srv.world:/dev/vdb

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
Created osd(s) 1 on host 'node02.srv.world'
[root@node01 ~]#
ceph orch daemon add osd node03.srv.world:/dev/vdb

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
Created osd(s) 2 on host 'node03.srv.world'

# after a few minutes, the status turns to [HEALTH_OK]

[root@node01 ~]#
ceph -s

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
  cluster:
    id:     cc27a71e-3fda-11ee-99e9-5254000f7c42
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 2m)
    mgr: node01.lvvxbq(active, since 8m), standbys: node02.sckvfa
    osd: 3 osds: 3 up (since 16s), 3 in (since 29s)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   80 MiB used, 480 GiB / 480 GiB avail
    pgs:     1 active+clean
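
# to verify which daemons Cephadm placed on each Node, list them like follows

[root@node01 ~]#
ceph orch ps
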
[4] To remove an OSD, run as follows.
[root@node01 ~]#
ceph osd tree

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.46857  root default
-3         0.15619      host node01
 0    hdd  0.15619          osd.0        up   1.00000  1.00000
-5         0.15619      host node02
 1    hdd  0.15619          osd.1        up   1.00000  1.00000
-7         0.15619      host node03
 2    hdd  0.15619          osd.2        up   1.00000  1.00000

# for example, remove OSD ID [2]

[root@node01 ~]#
ceph orch osd rm 2

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
Scheduled OSD(s) for removal.
VG/LV for the OSDs won't be zapped (--zap wasn't passed).
Run the `ceph-volume lvm zap` command with `--destroy` against the VG/LV if you want them to be destroyed.

# show removal status
# the removal is complete when no entries are shown
# it may take a long time

[root@node01 ~]#
ceph orch osd rm status

Inferring fsid cc27a71e-3fda-11ee-99e9-5254000f7c42
Inferring config /var/lib/ceph/cc27a71e-3fda-11ee-99e9-5254000f7c42/mon.node01/config
Using ceph image with id '14060fbd7be7' and tag 'v18' created on 2023-08-03 23:44:28 +0000 UTC
quay.io/ceph/ceph@sha256:bffa28055a8df508962148236bcc391ff3bbf271312b2e383c6aa086c086c82c
OSD  HOST              STATE     PGS  REPLACE  FORCE  ZAP    DRAIN STARTED AT
2    node03.srv.world  draining    1  False    False  False  2023-08-21 04:37:36.729845
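
# after the OSD disappears from the removal status, the LVM data is still left on the device as noted above
# to reuse the device for a new OSD, it's possible to zap it via the orchestrator (the Host and device path below are the ones used in this example)
# alternatively, [--zap] can be passed to [ceph orch osd rm] at removal time

[root@node01 ~]#
ceph orch device zap node03.srv.world /dev/vdb --force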