Ceph Luminous : Use as File System (2017/10/08)
Configure a Client to use Ceph Storage as follows.
                                        |
    +--------------------+              |              +--------------------+
    |  [dlp.srv.world]   |10.0.0.30     |      10.0.0.x|     [ Client ]     |
    |    Ceph-Deploy     +--------------+--------------+                    |
    +--------------------+              |              +--------------------+
            +---------------------------+---------------------------+
            |                           |                           |
            |10.0.0.51                  |10.0.0.52                  |10.0.0.53
+-----------+-----------+   +-----------+-----------+   +-----------+-----------+
|  [node01.srv.world]   |   |  [node02.srv.world]   |   |  [node03.srv.world]   |
|    Object Storage     +---+    Object Storage     +---+    Object Storage     |
|    Monitor Daemon     |   |                       |   |                       |
|                       |   |                       |   |                       |
+-----------------------+   +-----------------------+   +-----------------------+
This example mounts CephFS as a filesystem on a Client.
[1] Create an MDS (MetaData Server) on the Node you'd like to use as the MDS. This example configures it on node01.
[cent@dlp ceph]$ ceph-deploy mds create node01
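Before moving on, you can optionally confirm from the node that the MDS daemon actually started. This check is not in the original steps; the systemd unit name ceph-mds@node01 assumes the default naming that ceph-deploy sets up.

[cent@node01 ~]$ # optional: verify the MDS daemon is running (unit name is an assumption)
[cent@node01 ~]$ systemctl status ceph-mds@node01
[cent@node01 ~]$ ceph -s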
[2] Create at least two RADOS pools on the MDS Node and activate the MetaData Server.
For the pg_num value specified at the end of the pool create command, refer to the official documentation and choose an appropriate value.
⇒ http://docs.ceph.com/docs/master/rados/operations/placement-groups/
[cent@node01 ~]$ # create pools
[cent@node01 ~]$ ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[cent@node01 ~]$ ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
# enable pools
[cent@node01 ~]$ ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
# show list
[cent@node01 ~]$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[cent@node01 ~]$ ceph mds stat
e4: 1/1/1 up {0=node01=up:creating}
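If you want to double-check the pg_num each pool was created with, it can be read back per pool with the standard ceph osd pool get subcommand. This verification is not in the original article; the output shown assumes the value 128 used above.

[cent@node01 ~]$ # optional: confirm the placement group count of each pool
[cent@node01 ~]$ ceph osd pool get cephfs_data pg_num
pg_num: 128
[cent@node01 ~]$ ceph osd pool get cephfs_metadata pg_num
pg_num: 128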
[3] Mount CephFS on a Client.
[root@client ~]# yum -y install ceph-fuse
# get admin key
[root@client ~]# ssh cent@node01.srv.world "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
cent@node01.srv.world's password:
[root@client ~]# chmod 600 admin.key
[root@client ~]# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key
[root@client ~]# df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        26G  1.9G   25G   7% /
devtmpfs            devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs               tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs               tmpfs     2.0G  8.4M  2.0G   1% /run
tmpfs               tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1           xfs      1014M  231M  784M  23% /boot
tmpfs               tmpfs     396M     0  396M   0% /run/user/0
10.0.0.51:6789:/    ceph       78G   21G   58G  27% /mnt
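To have the Client mount CephFS automatically at boot, an /etc/fstab entry like the one below could be added. This is a sketch, not part of the original article: the absolute secretfile path /root/admin.key and the _netdev option (delay mounting until the network is up) are assumptions, so adjust them to your environment. After a reboot, df -hT should show the same ceph entry on /mnt.

[root@client ~]# # optional: persist the mount across reboots (path and options are assumptions)
[root@client ~]# echo 'node01.srv.world:6789:/ /mnt ceph name=admin,secretfile=/root/admin.key,noatime,_netdev 0 0' >> /etc/fstab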