Ceph Tentacle : Use File System
2025/09/25
Configure a Client Host [dlp] to use the Ceph Storage as follows.
        +--------------------+
        |  [dlp.srv.world]   |10.0.0.30
        |     Ceph Client    +-----------+
        +--------------------+           |
                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|    Object Storage     +----+    Object Storage     +----+    Object Storage     |
|    Monitor Daemon     |    |                       |    |                       |
|    Manager Daemon     |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
For example, mount it as a filesystem on a Client Host.
[1] Transfer the SSH public key to the Client Host and configure it from the Admin Node.
# transfer public key
[root@node01 ~]# ssh-copy-id dlp

# install required packages
[root@node01 ~]# ssh dlp "dnf -y install centos-release-ceph-tentacle; dnf --enablerepo=crb -y install ceph-fuse"

# transfer required files to Client Host
[root@node01 ~]# scp /etc/ceph/ceph.conf dlp:/etc/ceph/
ceph.conf                                     100%  195    98.1KB/s   00:00
[root@node01 ~]# scp /etc/ceph/ceph.client.admin.keyring dlp:/etc/ceph/
ceph.client.admin.keyring                     100%  151    71.5KB/s   00:00
[root@node01 ~]# ssh dlp "chown ceph:ceph /etc/ceph/ceph.*"
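Optionally, you can confirm that the Client Host can reach the cluster with the copied settings before moving on. This sketch assumes [ceph-common], which provides the [ceph] command, was pulled in as a dependency of [ceph-fuse] above.

# run from the Admin Node : show cluster status as seen from the Client Host
[root@node01 ~]# ssh dlp "ceph -s"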
[2] Configure MDS (MetaData Server) on a Node. It is configured on the [node01] Node in this example.
# create directory
# directory name ⇒ (Cluster Name)-(Node Name)
[root@node01 ~]# mkdir -p /var/lib/ceph/mds/ceph-node01
[root@node01 ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
creating /var/lib/ceph/mds/ceph-node01/keyring
[root@node01 ~]# chown -R ceph:ceph /var/lib/ceph/mds/ceph-node01
[root@node01 ~]# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
added key for mds.node01
[root@node01 ~]# systemctl enable --now ceph-mds@node01
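Optionally, verify that the MDS daemon is running and that its key was registered in the cluster. This is only a quick check on the commands above.

# show the systemd unit state of the MDS started above
[root@node01 ~]# systemctl status ceph-mds@node01 --no-pager
# show the key and capabilities registered for [mds.node01]
[root@node01 ~]# ceph auth get mds.node01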
[3] Create 2 RADOS pools for Data and MetaData on the MDS Node. Refer to the official documentation on how to choose the placement group number specified at the end of the command (32 in the example below) ⇒ http://docs.ceph.com/docs/master/rados/operations/placement-groups/
[root@node01 ~]# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
[root@node01 ~]# ceph osd pool create cephfs_metadata 32
pool 'cephfs_metadata' created
[root@node01 ~]# ceph osd pool set cephfs_data bulk true
set pool 8 bulk to true
[root@node01 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 9 and data pool 8
[root@node01 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@node01 ~]# ceph mds stat
cephfs:1 {0=node01=up:active}
[root@node01 ~]# ceph fs status cephfs
cephfs - 0 clients
======
RANK  STATE   MDS       ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  node01  Reqs:    0 /s    10     13     12      0
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata   129k   151G
  cephfs_data      data        0   151G
MDS version: ceph version 20.1.0 (010a3ad647c9962d47812a66ad6feda26ab28aa4) tentacle (rc - RelWithDebInfo)
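If you are unsure about the placement group number, the PG autoscaler is enabled by default on recent releases and adjusts it over time. Optionally, you can review what it chose for the new pools and check overall usage.

# show per-pool placement group autoscaler status
[root@node01 ~]# ceph osd pool autoscale-status
# show cluster-wide and per-pool capacity usage
[root@node01 ~]# ceph df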
[4] Mount CephFS on a Client Host.
# write the admin client's secret key (a base64 string) to a file
[root@dlp ~]# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
[root@dlp ~]# chmod 600 admin.key
[root@dlp ~]# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key
[root@dlp ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  4.0M     0  4.0M   0% /dev
tmpfs                   tmpfs     1.8G     0  1.8G   0% /dev/shm
tmpfs                   tmpfs     731M  8.6M  723M   2% /run
/dev/mapper/cs-root     xfs        26G  3.0G   24G  12% /
/dev/vda1               xfs      1014M  327M  688M  33% /boot
tmpfs                   tmpfs     366M     0  366M   0% /run/user/0
10.0.0.51:6789:/        ceph      152G     0  152G   0% /mnt
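Because [ceph-fuse] was installed in step [1], it is also possible to mount over FUSE instead of the kernel client, and the kernel mount can be made persistent with an [/etc/fstab] entry. The lines below are only a sketch that reuses this example's mount point [/mnt] and assumes the [admin.key] file above was written to [/root/admin.key].

# FUSE mount alternative (reads /etc/ceph/ceph.conf and the client.admin keyring by default)
# umount /mnt first if the kernel mount above is still active
[root@dlp ~]# ceph-fuse -m node01.srv.world:6789 /mnt

# persistent kernel mount : add a line like this to /etc/fstab
node01.srv.world:6789:/    /mnt    ceph    name=admin,secretfile=/root/admin.key,noatime,_netdev    0  0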
[5] To delete the CephFS or Pools you created, run commands as follows. To delete Pools, [mon allow pool delete = true] must be set in the [Monitor Daemon] configuration.
# delete CephFS
[root@node01 ~]# ceph fs rm cephfs --yes-i-really-mean-it

# delete pools
# ceph osd pool delete [Pool Name] [Pool Name] ***
[root@node01 ~]# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
[root@node01 ~]# ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
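The [mon allow pool delete = true] setting can be written into the [mon] section of [ceph.conf] on the Monitor Daemon Nodes, or set at runtime as in the sketch below; turning it back off afterwards is a reasonable precaution. Also, if [ceph fs rm] complains that MDS daemons are still active, take the filesystem offline first with [ceph fs fail].

# allow pool deletion temporarily (runtime setting, no daemon restart needed)
[root@node01 ~]# ceph config set mon mon_allow_pool_delete true
# take the filesystem offline before [ceph fs rm] if MDS daemons are still active
[root@node01 ~]# ceph fs fail cephfs
# after deleting the pools, disable pool deletion again
[root@node01 ~]# ceph config set mon mon_allow_pool_delete false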