Ceph Octopus : Use File System
2020/08/31
Configure a Client Host [dlp] to use Ceph Storage as follows.
        +--------------------+
        |   [dlp.srv.world]  |10.0.0.30
        |     Ceph Client    +-----------+
        |                    |           |
        +--------------------+           |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
In this example, the Ceph storage is mounted as a filesystem (CephFS) on a Client Host.
[1] Transfer the SSH public key to the Client Host and configure it from the Admin Node.
# transfer public key
root@node01:~# ssh-copy-id dlp

# install required packages
root@node01:~# ssh dlp "apt -y install ceph-fuse"

# transfer required files to Client Host
root@node01:~# scp /etc/ceph/ceph.conf dlp:/etc/ceph/
ceph.conf                                  100%  195    98.1KB/s   00:00
root@node01:~# scp /etc/ceph/ceph.client.admin.keyring dlp:/etc/ceph/
ceph.client.admin.keyring                  100%  151    71.5KB/s   00:00
root@node01:~# ssh dlp "chown ceph. /etc/ceph/ceph.*"
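As an optional check (not part of the original steps), you can verify that the copied ceph.conf and admin keyring work by querying the cluster status from the Client Host. This assumes the [ceph] CLI is available on [dlp] (provided by the [ceph-common] package, which is usually pulled in with [ceph-fuse]; install it explicitly if it is missing).

# optional: confirm the client can reach the cluster with the copied config and keyring
root@node01:~# ssh dlp "ceph -s"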
[2] Configure MDS (MetaData Server) on a Node. It is configured on the [node01] node in this example.
# create directory
# directory name ⇒ (Cluster Name)-(Node Name)
root@node01:~# mkdir -p /var/lib/ceph/mds/ceph-node01

# create a keyring for the MDS daemon
root@node01:~# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
creating /var/lib/ceph/mds/ceph-node01/keyring
root@node01:~# chown -R ceph. /var/lib/ceph/mds/ceph-node01

# register the MDS key with the cluster and start the daemon
root@node01:~# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
added key for mds.node01
root@node01:~# systemctl enable --now ceph-mds@node01
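Optionally (an addition, not on the original page), confirm that the MDS daemon started before creating a filesystem; until a filesystem exists, the daemon is reported as standby.

# optional: check the MDS service and its state in the cluster
root@node01:~# systemctl status ceph-mds@node01
root@node01:~# ceph mds stat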
[3] Create 2 RADOS pools for Data and MetaData on the MDS Node. Refer to the official documentation to choose the placement group number at the end of the command (64 in the example below) ⇒ http://docs.ceph.com/docs/master/rados/operations/placement-groups/
root@node01:~# ceph osd pool create cephfs_data 64
pool 'cephfs_data' created
root@node01:~# ceph osd pool create cephfs_metadata 64
pool 'cephfs_metadata' created
root@node01:~# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 4 and data pool 3
root@node01:~# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
root@node01:~# ceph mds stat
cephfs:1 {0=node01=up:active}
root@node01:~# ceph fs status cephfs
cephfs - 0 clients
======
RANK  STATE    MDS      ACTIVITY      DNS   INOS
 0    active  node01   Reqs:  0 /s     10     13
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  1536k  74.9G
  cephfs_data      data        0  74.9G
MDS version: ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)
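As a side note (an addition, not from the original page), Octopus generally enables the placement group autoscaler on newly created pools, so the value 64 above is only an initial setting; if you prefer to inspect or control it yourself, the per-pool mode can be checked and changed.

# optional sketch: inspect the PG autoscaler and set the per-pool mode ([on], [warn] or [off])
root@node01:~# ceph osd pool autoscale-status
root@node01:~# ceph osd pool set cephfs_data pg_autoscale_mode warn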
[4] Mount CephFS on a Client Host.
# output the client admin key (a Base64 string) to a file
root@dlp:~# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
root@dlp:~# chmod 600 admin.key
# mount CephFS with the kernel client
root@dlp:~# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key
root@dlp:~# df -hT
Filesystem                         Type      Size  Used Avail Use% Mounted on
udev                               devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                              tmpfs     394M  1.1M  393M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  ext4       25G  3.1G   21G  14% /
tmpfs                              tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs                              tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                              tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0                         squashfs   55M   55M     0 100% /snap/core18/1880
/dev/vda2                          ext4      976M  197M  713M  22% /boot
/dev/loop1                         squashfs   56M   56M     0 100% /snap/core18/1885
/dev/loop2                         squashfs   72M   72M     0 100% /snap/lxd/16100
/dev/loop3                         squashfs   71M   71M     0 100% /snap/lxd/16926
/dev/loop4                         squashfs   30M   30M     0 100% /snap/snapd/8542
/dev/loop5                         squashfs   30M   30M     0 100% /snap/snapd/8790
tmpfs                              tmpfs     394M     0  394M   0% /run/user/0
10.0.0.51:6789:/                   ceph       75G     0   75G   0% /mnt
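To keep the mount across reboots, an entry can be added to [/etc/fstab]; the sketch below assumes the key file created above is kept at [/root/admin.key]. Alternatively, because [ceph-fuse] was installed in step [1], CephFS can also be mounted with the FUSE client instead of the kernel driver.

# sketch of an /etc/fstab entry (assumes the secret file is /root/admin.key)
node01.srv.world:6789:/   /mnt   ceph   name=admin,secretfile=/root/admin.key,noatime,_netdev   0 0

# alternative: mount with the FUSE client (uses /etc/ceph/ceph.conf and the admin keyring)
root@dlp:~# umount /mnt
root@dlp:~# ceph-fuse -m node01.srv.world:6789 /mnt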