Ceph Jewel : Configure Ceph Cluster (2015/12/10)
Install the distributed file system "Ceph" to configure a storage cluster.
In this example, the cluster is configured with 1 Admin Node and 3 Storage Nodes, as shown below.
                                        |
        +--------------------+          |          +-------------------+
        |  [dlp.srv.world]   |10.0.0.30 | 10.0.0.x |     [ Client ]    |
        |     Ceph-Deploy    +----------+----------+                   |
        |                    |          |          |                   |
        +--------------------+          |          +-------------------+
            +---------------------------+----------------------------+
            |                           |                            |
            |10.0.0.51                  |10.0.0.52                   |10.0.0.53
+-----------+-----------+   +-----------+-----------+   +-----------+-----------+
|  [node01.srv.world]   |   |  [node02.srv.world]   |   |  [node03.srv.world]   |
|     Object Storage    +---+     Object Storage    +---+     Object Storage    |
|     Monitor Daemon    |   |                       |   |                       |
|                       |   |                       |   |                       |
+-----------------------+   +-----------------------+   +-----------------------+
[1] Add a user for Ceph admin on all Nodes. This example adds the "cent" user.
[2] Grant root privilege to the Ceph admin user just added above with sudo settings, and install the required packages. Furthermore, if Firewalld is running, allow the SSH service. Apply all of the settings below on every Node.
[root@dlp ~]# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
[root@dlp ~]# chmod 440 /etc/sudoers.d/ceph
[root@dlp ~]# yum -y install epel-release yum-plugin-priorities \
https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm
[root@dlp ~]# sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/ceph.repo
[root@dlp ~]# firewall-cmd --add-service=ssh --permanent
[root@dlp ~]# firewall-cmd --reload
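Before moving on, it can be worth confirming the sudoers entry and the firewall state. This is an optional sanity check; both commands are standard sudo/firewalld tools.
# list the sudo rules now in effect for the "cent" user
[root@dlp ~]# sudo -l -U cent
# confirm "ssh" is among the allowed services
[root@dlp ~]# firewall-cmd --list-services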
[3] On the Monitor Node (Monitor Daemon), if Firewalld is running, allow the required port. The Ceph monitor listens on 6789/tcp; in this example the monitor runs on node01.
[root@node01 ~]# firewall-cmd --add-port=6789/tcp --permanent
[root@node01 ~]# firewall-cmd --reload
[4] On all Storage Nodes (Object Storage), if Firewalld is running, allow the required ports. Ceph OSD daemons use ports in the 6800-7100/tcp range. Run this on node01, node02 and node03.
[root@node01 ~]# firewall-cmd --add-port=6800-7100/tcp --permanent
[root@node01 ~]# firewall-cmd --reload
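To verify the rule took effect after the reload (optional check, standard firewalld command), the output should include the range just added.
# list the ports currently opened on this node
[root@node01 ~]# firewall-cmd --list-ports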
[5] Log in as the Ceph admin user and configure Ceph. Set up an SSH key pair on the Admin Node ("dlp.srv.world" in this example) and distribute it to all Storage Nodes.
[cent@dlp ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cent/.ssh/id_rsa):
Created directory '/home/cent/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cent/.ssh/id_rsa.
Your public key has been saved in /home/cent/.ssh/id_rsa.pub.
The key fingerprint is:
54:c3:12:0e:d3:65:11:49:11:73:35:1b:e3:e8:63:5a cent@dlp.srv.world
The key's randomart image is:
[cent@dlp ~]$ vi ~/.ssh/config
# create new (define all nodes and users)
Host dlp
    Hostname dlp.srv.world
    User cent
Host node01
    Hostname node01.srv.world
    User cent
Host node02
    Hostname node02.srv.world
    User cent
Host node03
    Hostname node03.srv.world
    User cent
[cent@dlp ~]$ chmod 600 ~/.ssh/config
# transfer the key file to each Node
[cent@dlp ~]$ ssh-copy-id node01
cent@node01.srv.world's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[cent@dlp ~]$ ssh-copy-id node02
[cent@dlp ~]$ ssh-copy-id node03
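At this point, key-based login should work without a password. A quick loop over the host aliases defined in ~/.ssh/config above confirms it (optional check):
# each node should print its hostname without prompting for a password
[cent@dlp ~]$ for node in node01 node02 node03; do ssh $node hostname; done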
[6] Install Ceph on all Nodes from the Admin Node.
# create a working directory for the cluster config files and keys
[cent@dlp ~]$ mkdir ceph
[cent@dlp ~]$ cd ceph
[cent@dlp ceph]$ ceph-deploy new node01
[cent@dlp ceph]$ vi ./ceph.conf
# add to the end
osd pool default size = 2
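For reference, "osd pool default size = 2" sets the default replica count to 2, so each object is stored on two of the three OSDs. After "ceph-deploy new", the generated ./ceph.conf looks roughly like the following (the fsid below is an illustrative placeholder; yours will differ):
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = node01
mon_host = 10.0.0.51
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2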
# install Ceph on each Node
[cent@dlp ceph]$ ceph-deploy install dlp node01 node02 node03

# settings for monitoring and keys
[cent@dlp ceph]$ ceph-deploy mon create-initial
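If the monitor bootstraps successfully, ceph-deploy gathers the cluster keyrings into the working directory; a listing should show files along these lines (the standard ceph-deploy keyring names):
[cent@dlp ceph]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph.conf
ceph.bootstrap-osd.keyring  ceph.client.admin.keyring   ceph-deploy-ceph.log
ceph.mon.keyring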
[7] Configure the Ceph Cluster from the Admin Node.
# prepare Object Storage Daemon
[cent@dlp ceph]$ ceph-deploy osd prepare node01:/var/lib/ceph/osd node02:/var/lib/ceph/osd node03:/var/lib/ceph/osd

# activate Object Storage Daemon
[cent@dlp ceph]$ ceph-deploy osd activate node01:/var/lib/ceph/osd node02:/var/lib/ceph/osd node03:/var/lib/ceph/osd

# transfer config files
[cent@dlp ceph]$ ceph-deploy admin dlp node01 node02 node03

# show status (displays as follows if there are no problems)
[cent@dlp ceph]$ ceph health
HEALTH_OK
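Beyond "ceph health", a couple of standard status commands give a fuller picture of the new cluster:
# overall cluster status: monitor quorum, OSD count, placement group states
[cent@dlp ceph]$ ceph -s
# how the OSDs map onto the three Storage Nodes
[cent@dlp ceph]$ ceph osd tree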
[8] By the way, if you'd like to clean up the settings and configure the cluster again from scratch, run the following.
# remove packages
[cent@dlp ceph]$ ceph-deploy purge dlp node01 node02 node03

# remove settings
[cent@dlp ceph]$ ceph-deploy purgedata dlp node01 node02 node03
[cent@dlp ceph]$ ceph-deploy forgetkeys