Ceph : Configure Client
2014/06/11
How to use Ceph Cluster Storage from Clients.
                              |
+------------------+          |          +-----------------+
|  [ Admin Node ]  |10.0.0.30 | 10.0.0.31|  [ Client PC ]  |
|   Ceph-Deploy    |----------+----------|                 |
| Meta Data Server |          |          |                 |
+------------------+          |          +-----------------+
                              |
        +---------------------+---------------------+
        |                     |                     |
        |10.0.0.80            |10.0.0.81            |10.0.0.82
+-------+----------+  +-------+----------+  +-------+----------+
| [ Ceph Node #1 ] |  | [ Ceph Node #2 ] |  | [ Ceph Node #3 ] |
|  Monitor Daemon  +--+  Monitor Daemon  +--+  Monitor Daemon  |
|  Object Storage  |  |  Object Storage  |  |  Object Storage  |
+------------------+  +------------------+  +------------------+
[1] For example, mount an RBD image as a block device on the Admin Node.
# create an image-file with 5G
trusty@ceph-mds:~$ rbd create disk01 --size 5120
# confirm
trusty@ceph-mds:~$ rbd ls -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
disk01  5120M          1
# map the image to a block device
trusty@ceph-mds:~$ sudo rbd map disk01
# confirm
trusty@ceph-mds:~$ rbd showmapped
id  pool  image   snap  device
1   rbd   disk01  -     /dev/rbd1
# format with ext4
trusty@ceph-mds:~$ sudo mkfs.ext4 /dev/rbd1
# mount it
trusty@ceph-mds:~$ sudo mount /dev/rbd1 /mnt
trusty@ceph-mds:~$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/temp--vg-root   26G  1.3G   23G   6% /
none                       4.0K     0  4.0K   0% /sys/fs/cgroup
udev                       2.0G  4.0K  2.0G   1% /dev
tmpfs                      396M  316K  396M   1% /run
none                       5.0M     0  5.0M   0% /run/lock
none                       2.0G     0  2.0G   0% /run/shm
none                       100M     0  100M   0% /run/user
/dev/vda1                  236M   36M  188M  16% /boot
/dev/rbd1                  4.8G   10M  4.6G   1% /mnt     # just mounted
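To stop using the block device later, unmount the filesystem and unmap the image. This is a standard rbd cleanup sequence shown for completeness; the device name /dev/rbd1 follows from the mapping above.
# unmount the filesystem
trusty@ceph-mds:~$ sudo umount /mnt
# unmap the image from the device
trusty@ceph-mds:~$ sudo rbd unmap /dev/rbd1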
[2] For example, mount as a filesystem with CephFS from a Client PC.
# install the required packages
root@dlp:~# apt-get -y install ceph-common ceph-fs-common
# get the Admin key from the Admin Node
trusty@dlp:~$ ssh ceph-mds.srv.world "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
trusty@ceph-mds.srv.world's password:
trusty@dlp:~$ chmod 600 admin.key
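Next, mount CephFS with the Admin key. A plausible invocation is below; the monitor address 10.0.0.81 and the mount point /mnt are taken from the df output that follows, and the exact options may differ by environment.
# mount CephFS (example; monitor address and options may vary)
trusty@dlp:~$ sudo mount -t ceph 10.0.0.81:6789:/ /mnt -o name=admin,secretfile=admin.key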
trusty@dlp:~$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/temp--vg-root   26G  1.2G   23G   5% /
none                       4.0K     0  4.0K   0% /sys/fs/cgroup
udev                       2.0G  4.0K  2.0G   1% /dev
tmpfs                      396M  304K  396M   1% /run
none                       5.0M     0  5.0M   0% /run/lock
none                       2.0G     0  2.0G   0% /run/shm
none                       100M     0  100M   0% /run/user
/dev/vda1                  236M   36M  188M  16% /boot
10.0.0.81:6789:/            76G   23G   54G  30% /mnt     # just mounted
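To remount CephFS automatically at boot, an entry can be added to /etc/fstab. A sketch, assuming the key file is kept at /home/trusty/admin.key (a hypothetical path for this example):
# /etc/fstab entry (sketch; the secretfile path is an assumption)
10.0.0.81:6789:/   /mnt   ceph   name=admin,secretfile=/home/trusty/admin.key,noatime,_netdev   0   0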