Ceph Pacific : Use Block Device
2023/06/19
Configure a Client Host [dlp] to use Ceph Storage as follows.
         +--------------------+
         | [dlp.srv.world]    |10.0.0.30
         |   Ceph Client      +-----------+
         |                    |           |
         +--------------------+           |
             +----------------------------+----------------------------+
             |                            |                            |
             |10.0.0.51                   |10.0.0.52                   |10.0.0.53
 +-----------+-----------+    +-----------+-----------+    +-----------+-----------+
 | [node01.srv.world]    |    | [node02.srv.world]    |    | [node03.srv.world]    |
 |   Object Storage      +----+   Object Storage      +----+   Object Storage      |
 |   Monitor Daemon      |    |                       |    |                       |
 |   Manager Daemon      |    |                       |    |                       |
 +-----------------------+    +-----------------------+    +-----------------------+
For example, create a block device and mount it on the Client Host.
[1] Transfer the SSH public key to the Client Host and configure it from the Admin Node.
# transfer public key
root@node01:~# ssh-copy-id dlp

# install required packages
root@node01:~# ssh dlp "apt -y install ceph-common"
# transfer required files to Client Host
root@node01:~# scp /etc/ceph/ceph.conf dlp:/etc/ceph/
ceph.conf                                    100%  273   343.7KB/s   00:00
root@node01:~# scp /etc/ceph/ceph.client.admin.keyring dlp:/etc/ceph/
ceph.client.admin.keyring                    100%  151   191.1KB/s   00:00
root@node01:~# ssh dlp "chown ceph:ceph /etc/ceph/ceph.*"
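The steps above copy the cluster admin keyring to the client for simplicity. As a minimal sketch of a more restricted setup (the user name [client.rbd] and the limitation to the [rbd] pool created in [2] are assumptions, not part of the steps above), a dedicated keyring with RBD-only capabilities could be created and transferred instead:

# create a restricted keyring on the Admin Node (assumed name [client.rbd], limited to pool [rbd])
root@node01:~# ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=rbd' -o /etc/ceph/ceph.client.rbd.keyring
# transfer it to the Client Host
root@node01:~# scp /etc/ceph/ceph.client.rbd.keyring dlp:/etc/ceph/
# on the Client Host, pass the user name to rbd commands, for example
root@dlp:~# rbd --id rbd ls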
[2] Create a block device and mount it on the Client Host.
# create default RBD pool [rbd]
root@dlp:~# ceph osd pool create rbd 32
pool 'rbd' created
# enable Placement Groups auto scale mode
root@dlp:~# ceph osd pool set rbd pg_autoscale_mode on
set pool 3 pg_autoscale_mode to on
# initialize the pool
root@dlp:~# rbd pool init rbd
root@dlp:~# ceph osd pool autoscale-status
POOL                    SIZE  TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
device_health_metrics   0                  3.0   479.9G        0.0000                                 1.0   1                   on         False
rbd                     19                 3.0   479.9G        0.0000                                 1.0   32                  on         False
# create a block device with 10G
root@dlp:~# rbd create --size 10G --pool rbd rbd01
# confirm
root@dlp:~# rbd ls -l
NAME   SIZE    PARENT  FMT  PROT  LOCK
rbd01  10 GiB          2
# map the block device
root@dlp:~# rbd map rbd01
/dev/rbd0
# confirm
root@dlp:~# rbd showmapped
id  pool  namespace  image  snap  device
0   rbd              rbd01  -     /dev/rbd0
# format with EXT4
root@dlp:~# mkfs.ext4 /dev/rbd0
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: b826aec2-064e-4bae-9d30-b37a3ec5ee15
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

root@dlp:~# mount /dev/rbd0 /mnt
root@dlp:~# df -hT
Filesystem                   Type      Size  Used  Avail  Use%  Mounted on
udev                         devtmpfs  1.9G  0     1.9G   0%    /dev
tmpfs                        tmpfs     392M  572K  391M   1%    /run
/dev/mapper/debian--vg-root  ext4      28G   1.4G  26G    6%    /
tmpfs                        tmpfs     2.0G  0     2.0G   0%    /dev/shm
tmpfs                        tmpfs     5.0M  0     5.0M   0%    /run/lock
/dev/vda1                    ext2      455M  58M   373M   14%   /boot
tmpfs                        tmpfs     392M  0     392M   0%    /run/user/0
/dev/rbd0                    ext4      9.8G  24K   9.3G   1%    /mnt
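If the image later needs more space, it can be grown while mounted. A minimal sketch, assuming the [rbd01] image and the ext4 filesystem created above (the 20G target size is only an example value):

# grow the RBD image to the example target size
root@dlp:~# rbd resize --size 20G rbd/rbd01
# grow the mounted ext4 filesystem online to fill the enlarged device
root@dlp:~# resize2fs /dev/rbd0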
[3] To delete block devices or pools you created, run commands like the following. To delete a pool, [mon allow pool delete = true] must be set in the [Monitor Daemon] configuration.
# unmap
root@dlp:~# rbd unmap /dev/rbd/rbd/rbd01
# delete a block device
root@dlp:~# rbd rm rbd01 -p rbd
Removing image: 100% complete...done.
# delete a pool
# ceph osd pool delete [Pool Name] [Pool Name] ***
root@dlp:~# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
pool 'rbd' removed
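If the pool deletion is refused, the flag mentioned in [3] can be enabled first. A minimal sketch using the centralized configuration database (alternatively, set [mon allow pool delete = true] in [ceph.conf] on the Monitor Daemon nodes and restart them):

# allow pool deletion on all Monitor Daemons
root@dlp:~# ceph config set mon mon_allow_pool_delete true
# optionally disable it again after the pool has been removed
root@dlp:~# ceph config set mon mon_allow_pool_delete false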