Ceph Nautilus : Use as Block Device
2019/06/12
Configure Clients to use Ceph Storage as follows.
                                         |
        +--------------------+           |           +----------------------+
        |  [dlp.srv.world]   |10.0.0.30  |  10.0.0.31| [client01.srv.world] |
        |    Ceph-Ansible    +-----------+-----------+                      |
        |                    |           |           |                      |
        +--------------------+           |           +----------------------+
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
For example, create a block device and mount it on a Client.
[1]
Next, configure the Client Node with the Ansible Playbook as follows. The Playbook is the existing one used for the initial cluster setup; refer to here.
# create new (sets [copy_admin_key] in the ceph-ansible clients group_vars)
copy_admin_key: true

# add to the end (of the Ansible inventory file)
[clients]
client01.srv.world

[cent@dlp ~]$ cd /usr/share/ceph-ansible
[cent@dlp ceph-ansible]$ ansible-playbook site.yml --limit=clients
.....
.....
PLAY RECAP *********************************************************************
client01.srv.world         : ok=89   changed=12   unreachable=0    failed=0    skipped=184  rescued=0    ignored=0

INSTALLER STATUS ***************************************************************
Install Ceph Client : Complete (0:02:24)
.....
.....
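After the playbook completes, it is worth verifying the client setup before creating any images. The two commands below are an optional sanity check, not part of the original procedure; they assume [copy_admin_key] placed the admin keyring under the default /etc/ceph path.

# confirm ceph.conf and the admin keyring were deployed
[cent@client01 ~]$ ls -l /etc/ceph
# confirm the Client Node can reach the cluster
[cent@client01 ~]$ sudo ceph -s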
[2]
Create a Block Device and mount it on a Client Node.
# create default RBD pool
[cent@client01 ~]$ sudo ceph osd pool create rbd 8
pool 'rbd' created
[cent@client01 ~]$ sudo rbd pool init rbd

# for example, create a disk with 10G
[cent@client01 ~]$ sudo rbd create rbd01 --size 10G --image-feature layering
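The [--image-feature layering] flag limits the new image to the base feature set: the CentOS 7 kernel RBD driver does not support newer features such as object-map or deep-flatten, and mapping an image that has them enabled fails. As an optional check (illustrative, not from the original procedure), confirm the enabled features before mapping:

# show size, object layout and enabled features of the image
[cent@client01 ~]$ sudo rbd info rbd01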
# map the image to a device
[cent@client01 ~]$ sudo rbd map rbd01
/dev/rbd0

# show list
[cent@client01 ~]$ sudo rbd ls -l
NAME   SIZE   PARENT FMT PROT LOCK
rbd01 10 GiB          2

# show mapping
[cent@client01 ~]$ sudo rbd showmapped
id pool namespace image snap device
0  rbd            rbd01 -    /dev/rbd0

# format with XFS
[cent@client01 ~]$ sudo mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
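Note that the image does not have to keep its initial size: it can be grown later without recreating it. The lines below are an illustrative sketch, not part of the original procedure; [xfs_growfs] only works once the filesystem is mounted, which is done in the next step.

# extend the image from 10G to 20G
[cent@client01 ~]$ sudo rbd resize rbd01 --size 20G
# then, with the filesystem mounted, grow XFS online to use the new space
[cent@client01 ~]$ sudo xfs_growfs /mnt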
[cent@client01 ~]$ sudo mount /dev/rbd0 /mnt
[cent@client01 ~]$ df -hT
Filesystem               Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  xfs        26G  1.5G   25G   6% /
devtmpfs                 devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                    tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs                    tmpfs     3.9G  8.6M  3.9G   1% /run
tmpfs                    tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1                xfs      1014M  250M  765M  25% /boot
tmpfs                    tmpfs     799M     0  799M   0% /run/user/0
tmpfs                    tmpfs     799M     0  799M   0% /run/user/1000
/dev/rbd0                xfs        10G   33M   10G   1% /mnt
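To detach the device cleanly, unmount the filesystem first and then unmap the image. The optional [rbdmap] part below is a sketch: the service and the /etc/ceph/rbdmap entry format ship with ceph-common, but the exact entry here assumes this example image and the default admin keyring path.

# unmount and unmap when finished
[cent@client01 ~]$ sudo umount /mnt
[cent@client01 ~]$ sudo rbd unmap /dev/rbd0

# (optional) map the image automatically at boot via the rbdmap service
[cent@client01 ~]$ sudo vi /etc/ceph/rbdmap
# add to the end
rbd/rbd01       id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

[cent@client01 ~]$ sudo systemctl enable rbdmap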