Ceph : Configure Cluster
2018/12/25
Install the distributed file system Ceph and configure a storage cluster.
In this example, configure the cluster with 1 admin node and 3 storage nodes as follows.
                                         |
        +--------------------+           |           +--------------------+
        |  [dlp.srv.world]   |10.0.0.30  |  10.0.0.x | [client.srv.world] |
        |    Ceph-Deploy     +-----------+-----------+                    |
        |                    |           |           |                    |
        +--------------------+           |           +--------------------+
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|                       |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
[1]
Add a user for Ceph admin on all nodes.
This example adds the [ubuntu] user.
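As a minimal sketch for this step (assuming the [ubuntu] user name above), the user can be added like follows; run it on every node (dlp, node01, node02, node03).

# add the Ceph admin user (repeat on all nodes)
root@dlp:~# adduser ubuntu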
[2]
Grant root privileges via sudo to the Ceph admin user just added above, and also install the required packages.
root@dlp:~# apt -y install openssh-server python-ceph
root@dlp:~# echo -e 'Defaults:ubuntu !requiretty\nubuntu ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
root@dlp:~# chmod 440 /etc/sudoers.d/ceph
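To make sure the sudo settings above are in effect, the privileges granted to the [ubuntu] user can be listed like follows.

# should report (root) NOPASSWD: ALL for the [ubuntu] user
root@dlp:~# sudo -l -U ubuntu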
[3]
Log in as the Ceph admin user and configure Ceph.
Create an SSH key-pair on the Ceph admin node and send the public key to all storage nodes.
ubuntu@dlp:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa):
Created directory '/home/ubuntu/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ubuntu/.ssh/id_rsa.
Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:HJArjy33ATN7IvhbND2lfPQ7APZ1Av/hEy1qHg6gcE8 ubuntu@dlp.srv.world
The key's randomart image is:
ubuntu@dlp:~$ vi ~/.ssh/config
# create new ( define all nodes and users )
Host dlp
    Hostname dlp.srv.world
    User ubuntu
Host node01
    Hostname node01.srv.world
    User ubuntu
Host node02
    Hostname node02.srv.world
    User ubuntu
Host node03
    Hostname node03.srv.world
    User ubuntu
ubuntu@dlp:~$ chmod 600 ~/.ssh/config
# transfer public-key
ubuntu@dlp:~$ ssh-copy-id node01
ubuntu@node01.srv.world's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

ubuntu@dlp:~$ ssh-copy-id node02
ubuntu@dlp:~$ ssh-copy-id node03
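After transferring the keys, it's a good idea to confirm that key-based login works without a password prompt, for example like follows.

# runs [hostname] on node01; must not ask for a password
ubuntu@dlp:~$ ssh node01 hostname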
[5]
Install Ceph on all nodes from the admin node.
# configure cluster with node01 as the monitor node
ubuntu@dlp:~/ceph$ ceph-deploy new node01
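[ceph-deploy new] writes the initial config files into the current working directory; the generated [ceph.conf] typically looks like below (the fsid and monitor address depend on your environment).

ubuntu@dlp:~/ceph$ cat ./ceph.conf
[global]
fsid = 8166a55f-be6a-41f4-ad8d-4f8b68cac788
mon_initial_members = node01
mon_host = 10.0.0.51
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx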
# install Ceph on each node
ubuntu@dlp:~/ceph$ ceph-deploy install dlp node01 node02 node03

# settings for monitoring and keys
ubuntu@dlp:~/ceph$ ceph-deploy mon create-initial

# configure manager node
ubuntu@dlp:~/ceph$ ceph-deploy mgr create node01
[6]
Configure the Ceph cluster from the admin node.
# prepare Object Storage Daemons
ubuntu@dlp:~/ceph$ ceph-deploy osd prepare node01:/var/lib/ceph/osd node02:/var/lib/ceph/osd node03:/var/lib/ceph/osd

# activate Object Storage Daemons
ubuntu@dlp:~/ceph$ ceph-deploy osd activate node01:/var/lib/ceph/osd node02:/var/lib/ceph/osd node03:/var/lib/ceph/osd
# transfer config files
ubuntu@dlp:~/ceph$ ceph-deploy admin dlp node01 node02 node03

# show status (no problem if HEALTH_OK)
ubuntu@dlp:~/ceph$ sudo ceph status
  cluster:
    id:     8166a55f-be6a-41f4-ad8d-4f8b68cac788
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum node01
    mgr: node01(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   3077 MB used, 27642 MB / 30720 MB avail
    pgs:
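It's also possible to confirm that all 3 OSDs joined the cluster with [ceph osd tree] (host names and weights depend on your environment).

# shows each OSD with its host and up/down state
ubuntu@dlp:~/ceph$ sudo ceph osd tree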
[7]
By the way, if you'd like to clear the settings and configure the cluster again, do it like follows.
# remove packages
ubuntu@dlp:~/ceph$ ceph-deploy purge dlp node01 node02 node03

# remove settings
ubuntu@dlp:~/ceph$ ceph-deploy purgedata dlp node01 node02 node03
ubuntu@dlp:~/ceph$ ceph-deploy forgetkeys
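In addition, the config files and keyrings that ceph-deploy generated in the local working directory remain after [forgetkeys]; remove them like follows before re-running [ceph-deploy new].

# remove generated ceph.conf and keyrings in the working directory
ubuntu@dlp:~/ceph$ rm ceph.*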