Ceph Nautilus : Use as File System
2019/06/12
Configure clients to use Ceph storage as follows.
   +--------------------+          |          +----------------------+
   |  [dlp.srv.world]   |10.0.0.30 | 10.0.0.31| [client01.srv.world] |
   |    Ceph-Ansible    +----------+----------+                      |
   |                    |          |          |                      |
   +--------------------+          |          +----------------------+
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
This example mounts CephFS as a filesystem on a client.
[1] Create an MDS (MetaData Server) on the node you'd like to use for it. This example sets it up on [node01]. The Ansible Playbook is the same one used for the initial setup; refer to here.
# add to the end
[mdss]
node01.srv.world

[cent@dlp ~]$ cd /usr/share/ceph-ansible
[cent@dlp ceph-ansible]$ ansible-playbook site.yml --limit=mdss
.....
.....
PLAY RECAP *********************************************************************
node01.srv.world           : ok=328  changed=15  unreachable=0  failed=0  skipped=393  rescued=0  ignored=0

INSTALLER STATUS ***************************************************************
Install Ceph Monitor       : Complete (0:00:38)
Install Ceph Manager       : Complete (0:00:29)
Install Ceph OSD           : Complete (0:00:37)
Install Ceph MDS           : Complete (0:01:03)
.....
.....

# show state
[cent@node01 ~]$ sudo ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[cent@node01 ~]$ sudo ceph mds stat
cephfs:1 {0=node01=up:active}
[cent@node01 ~]$ sudo ceph fs status cephfs
cephfs - 0 clients
======
+------+--------+--------+---------------+-------+-------+
| Rank | State  |  MDS   |   Activity    |  dns  |  inos |
+------+--------+--------+---------------+-------+-------+
|  0   | active | node01 | Reqs:    0 /s |   10  |   13  |
+------+--------+--------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 1536k | 74.0G |
|   cephfs_data   |   data   |    0  | 74.0G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
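The playbook created the cephfs_data and cephfs_metadata pools and the filesystem automatically. If you were setting this up without ceph-ansible, a roughly equivalent manual sequence would look like the sketch below. This must run on a node with admin access to a live cluster, and the placement-group count of 64 is an assumption, not a value from this article:

```shell
# sketch only: manual equivalent of what the ceph-ansible MDS play configures
# (pool and filesystem names taken from the article; pg_num=64 is an assumed value)
sudo ceph osd pool create cephfs_data 64          # data pool
sudo ceph osd pool create cephfs_metadata 64      # metadata pool
sudo ceph fs new cephfs cephfs_metadata cephfs_data
sudo ceph fs ls                                   # should list the new filesystem
```

Note that `ceph fs new` still needs a running MDS daemon before the filesystem becomes active, which is what the playbook's [mdss] role provides.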
[2] Mount CephFS on a Client Node.
[root@client01 ~]# yum -y install ceph-fuse

# get admin key
[root@client01 ~]# ssh cent@node01.srv.world "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
cent@node01.srv.world's password:
[root@client01 ~]# chmod 600 admin.key
[root@client01 ~]# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key
[root@client01 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        26G  1.6G   25G   7% /
devtmpfs                devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                   tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs                   tmpfs     3.9G  8.6M  3.9G   1% /run
tmpfs                   tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1               xfs      1014M  250M  765M  25% /boot
tmpfs                   tmpfs     799M     0  799M   0% /run/user/0
10.0.0.51:6789:/        ceph       75G     0   75G   0% /mnt
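A mount done this way disappears on reboot. To make it persistent, an /etc/fstab entry along the following lines can be used; this is a sketch, and the noatime/_netdev options and the /root/admin.key path are assumptions rather than something shown above:

```shell
# /etc/fstab — persistent kernel CephFS mount
# (name= and secretfile= come from the article; noatime, _netdev, and the key path are assumed)
node01.srv.world:6789:/  /mnt  ceph  name=admin,secretfile=/root/admin.key,noatime,_netdev  0 0
```

Alternatively, since the ceph-fuse package was installed above, `ceph-fuse -m node01.srv.world:6789 /mnt` mounts the same filesystem through FUSE instead of the kernel client.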