CentOS 8

Ceph Octopus : Cephadm #2 Configure Cluster (2020/07/08)

 
Configure a Ceph Cluster with [Cephadm], the Ceph cluster deployment tool.
In this example, the Ceph cluster consists of 3 nodes as shown below.
Furthermore, each storage node has a free block device that is used for the Ceph OSDs.
(this example uses [/dev/sdb])
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
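
This page assumes that the first node [node01] has already been bootstrapped with [Cephadm] as in #1. As an optional first check (output omitted here), you can confirm from [node01] that the cephadm orchestrator backend is available before adding hosts.
# confirm the orchestrator backend is available (optional check)

[root@node01 ~]#
ceph orch status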

[1]
[2] To add nodes to the cluster, run as follows.
For example, add [node02] and [node03].
# transfer SSH public key

[root@node01 ~]#
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node02


Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node02'"
and check to make sure that only the key(s) you wanted were added.

[root@node01 ~]#
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node03


Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node03'"
and check to make sure that only the key(s) you wanted were added.
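
The public key [/etc/ceph/ceph.pub] was generated when the cluster was bootstrapped. If the file is missing on the admin node, it can be re-exported from the cluster with the command below (an optional alternative; not needed when the file already exists).
# (optional) re-export the cluster's SSH public key

[root@node01 ~]#
ceph cephadm get-pub-key > /etc/ceph/ceph.pub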

# add the target nodes to the cluster

[root@node01 ~]#
ceph orch host add node02

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
Added host 'node02'
[root@node01 ~]#
ceph orch host add node03

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
Added host 'node03'

[root@node01 ~]#
ceph orch host ls

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
HOST    ADDR    LABELS  STATUS
node01  node01
node02  node02
node03  node03
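
Optionally, hosts can be tagged with arbitrary labels, which can later be used to control daemon placement. The label name [osds] below is only an example.
# (optional) add an arbitrary label to a host

[root@node01 ~]#
ceph orch host label add node02 osds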
[3] To configure OSDs, run as follows.
For example, configure [node01], [node02], and [node03].
# list available devices

# a device can be used for an OSD if [AVAIL = True]

[root@node01 ~]#
ceph orch device ls

HOST    PATH      TYPE   SIZE  DEVICE  AVAIL  REJECT REASONS
node01  /dev/sdb  hdd   80.0G          True
node01  /dev/sda  hdd   30.0G          False  Insufficient space (<5GB) on vgs, locked, LVM detected
node02  /dev/sdb  hdd   80.0G          True
node02  /dev/sda  hdd   30.0G          False  locked, Insufficient space (<5GB) on vgs, LVM detected
node03  /dev/sdb  hdd   80.0G          True
node03  /dev/sda  hdd   30.0G          False  LVM detected, Insufficient space (<5GB) on vgs, locked
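
As an alternative to adding each device by hand as shown next, Cephadm can also create OSDs automatically on every device reported with [AVAIL = True]. The command below is only a sketch of that alternative and is not used in the rest of this example.
# (alternative) create OSDs on all available devices automatically

[root@node01 ~]#
ceph orch apply osd --all-available-devices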

# configure OSD

[root@node01 ~]#
ceph orch daemon add osd node01:/dev/sdb

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
Created osd(s) 0 on host 'node01'
[root@node01 ~]#
ceph orch daemon add osd node02:/dev/sdb

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
Created osd(s) 1 on host 'node02'
[root@node01 ~]#
ceph orch daemon add osd node03:/dev/sdb

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
Created osd(s) 2 on host 'node03'

# after a few minutes, the status turns to [HEALTH_OK]

[root@node01 ~]#
ceph -s

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
  cluster:
    id:     998fbdaa-c00d-11ea-9083-52540067a927
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 15m)
    mgr: node01.yzylhr(active, since 18m), standbys: node03.bylgui
    osd: 3 osds: 3 up (since 2m), 3 in (since 2m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 237 GiB / 240 GiB avail
    pgs:     1 active+clean
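
To check how data is distributed across the new OSDs, per-OSD utilization can also be displayed (output omitted here).
# show per-OSD utilization

[root@node01 ~]#
ceph osd df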
[4] To remove an OSD, run as follows.
[root@node01 ~]#
ceph osd tree

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.23428  root default
-3         0.07809      host node01
 0    hdd  0.07809          osd.0        up   1.00000  1.00000
-5         0.07809      host node02
 1    hdd  0.07809          osd.1        up   1.00000  1.00000
-7         0.07809      host node03
 2    hdd  0.07809          osd.2        up   1.00000  1.00000

# for example, remove OSD ID [2]

[root@node01 ~]#
ceph orch osd rm 2

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
Scheduled OSD(s) for removal

# show removal status

# removal is complete when no entries are shown

# this can take a long time

[root@node01 ~]#
ceph orch osd rm status

INFO:cephadm:Inferring fsid 998fbdaa-c00d-11ea-9083-52540067a927
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
NAME  HOST   PGS STARTED_AT
osd.2 node03  1  2020-07-07 06:40:46.617879
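
After the removal finishes, the backing device on [node03] is released but may still contain LVM metadata from the old OSD. If you plan to reuse it for a new OSD, it can be wiped as shown below; this is only an example for [node03:/dev/sdb] and it destroys all data on the device.
# (after removal completes) wipe the freed device so it can be reused

[root@node01 ~]#
ceph orch device zap node03 /dev/sdb --force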