Ceph Nautilus : Configure Ceph Cluster
2019/06/12
Install the distributed file system Ceph and configure a storage cluster.
In this example, the cluster is configured with 1 admin node and 3 storage nodes, as follows.
Furthermore, each storage node has a free block device that is used for Ceph storage. (this example uses [/dev/sdb] for it)

        +--------------------+
        |  [dlp.srv.world]   |10.0.0.30
        |    Ceph-Ansible    +-----------+
        |                    |           |
        +--------------------+           |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|    Object Storage     +----+    Object Storage     +----+    Object Storage     |
|    Monitor Daemon     |    |                       |    |                       |
|    Manager Daemon     |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
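All nodes must be able to resolve each other's hostnames. If the names are not registered in DNS, adding entries like the following to [/etc/hosts] on every node works (these lines are assumed from the diagram above, they are not part of the original configuration):

10.0.0.30    dlp.srv.world dlp
10.0.0.51    node01.srv.world node01
10.0.0.52    node02.srv.world node02
10.0.0.53    node03.srv.world node03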
[1]
Add a user for Ceph admin on all nodes. (use any name except [ceph], which is reserved by the system)
This example adds the [cent] user.
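A minimal sketch of the user creation, run on every node (the password you set at the prompt is your own choice):

[root@dlp ~]# useradd cent
[root@dlp ~]# passwd cent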
[2]
Grant root privileges via sudo to the Ceph admin user just added above. Furthermore, if Firewalld is running, allow the SSH service. Apply all of the above on all nodes.
[root@dlp ~]# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
[root@dlp ~]# chmod 440 /etc/sudoers.d/ceph
[root@dlp ~]# firewall-cmd --add-service=ssh --permanent
[root@dlp ~]# firewall-cmd --reload
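To confirm the settings, switching to the new user and running a command via sudo should succeed without a password prompt, like follows:

[root@dlp ~]# su - cent
[cent@dlp ~]$ sudo whoami
root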
[3]
Log in as the Ceph admin user and configure Ceph. Set up an SSH key-pair from the Ceph admin node ([dlp.srv.world] in this example) to all storage nodes.
[cent@dlp ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cent/.ssh/id_rsa):
Created directory '/home/cent/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cent/.ssh/id_rsa.
Your public key has been saved in /home/cent/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:bIzF6NAyEzpeV2CHOrr0JBgttu95RslBQ9bEDooXVbA cent@dlp.srv.world
The key's randomart image is:
[cent@dlp ~]$ vi ~/.ssh/config
# create new ( define all nodes and users )
Host dlp
    Hostname dlp.srv.world
    User cent
Host node01
    Hostname node01.srv.world
    User cent
Host node02
    Hostname node02.srv.world
    User cent
Host node03
    Hostname node03.srv.world
    User cent
[cent@dlp ~]$ chmod 600 ~/.ssh/config

# transfer key file
[cent@dlp ~]$ ssh-copy-id node01
cent@node01.srv.world's password:
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[cent@dlp ~]$ ssh-copy-id node02
[cent@dlp ~]$ ssh-copy-id node03
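Passwordless login can then be verified with the host aliases defined in [~/.ssh/config], for example:

[cent@dlp ~]$ ssh node01 "hostname"
node01.srv.world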
[4]
Install the required repository packages on the admin node, then install Ceph-Ansible to configure the Ceph cluster.
# ( the package names below assume CentOS 7 with the Storage SIG repository )
[cent@dlp ~]$ sudo yum -y install centos-release-ceph-nautilus epel-release
[cent@dlp ~]$ sudo yum -y install ceph-ansible
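As a quick sanity check before the next step, confirm that Ansible runs and that the Ceph-Ansible files are in place (the path is the package default referenced in step [5]):

[cent@dlp ~]$ ansible --version
[cent@dlp ~]$ ls /usr/share/ceph-ansible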
[5]
Configure the Ceph cluster with Ceph-Ansible on the admin node.
# ( the file paths below follow Ceph-Ansible's standard layout; they are assumed here )
[cent@dlp ~]$ vi /usr/share/ceph-ansible/group_vars/all.yml
# create new
ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: nautilus
fetch_directory: ~/ceph-ansible-keys
# set network interface for monitoring
monitor_interface: eth0
# specify your public network
public_network: 10.0.0.0/24
# specify cluster network
# if it's the same with the public network, set like follows
# if not the same, specify your cluster network
cluster_network: "{{ public_network }}"

[cent@dlp ~]$ vi /usr/share/ceph-ansible/group_vars/osds.yml
# create new
# specify devices for saving data on Storage Nodes
devices:
  - /dev/sdb

[cent@dlp ~]$ vi /etc/ansible/hosts
# add to the end
# specify Ceph admin user for SSH and Sudo
[all:vars]
ansible_ssh_user=cent
ansible_become=true
ansible_become_method=sudo
ansible_become_user=root

# set Monitor Daemon Node
[mons]
node01.srv.world

# set Manager Daemon Node
[mgrs]
node01.srv.world

# set OSD (Object Storage Daemon) Nodes
[osds]
node01.srv.world
node02.srv.world
node03.srv.world

# run Playbook to setup Ceph Cluster
[cent@dlp ~]$ cd /usr/share/ceph-ansible
[cent@dlp ceph-ansible]$ ansible-playbook site.yml
.....
.....
PLAY RECAP *********************************************************************
node01.srv.world           : ok=243  changed=16  unreachable=0  failed=0  skipped=328  rescued=0  ignored=0
node02.srv.world           : ok=114  changed=18  unreachable=0  failed=0  skipped=184  rescued=0  ignored=0
node03.srv.world           : ok=114  changed=18  unreachable=0  failed=0  skipped=172  rescued=0  ignored=0

INSTALLER STATUS ***************************************************************
Install Ceph Monitor       : Complete (0:00:35)
Install Ceph Manager       : Complete (0:01:22)
Install Ceph OSD           : Complete (0:03:25)

Wednesday 12 June 2019  19:27:08 +0900 (0:00:00.114)       0:06:02.874 ********
===============================================================================
ceph-common : install redhat ceph packages ---------------------------- 116.31s
ceph-mgr : install ceph-mgr packages on RedHat or SUSE ----------------- 55.59s
ceph-common : install yum plugin priorities ---------------------------- 15.26s
ceph-osd : use ceph-volume lvm batch to create bluestore osds ---------- 11.37s
ceph-common : install centos dependencies ------------------------------ 10.25s
gather and delegate facts ----------------------------------------------- 4.55s
ceph-osd : apply operating system tuning -------------------------------- 3.27s
ceph-common : configure red hat ceph community repository stable key ---- 2.47s
ceph-common : configure red hat ceph community repository stable key ---- 2.23s
ceph-common : configure red hat ceph community repository stable key ---- 1.95s
ceph-mon : copy keys to the ansible server ------------------------------ 1.87s
ceph-infra : open monitor and manager ports ----------------------------- 1.85s
ceph-mon : fetch ceph initial keys -------------------------------------- 1.74s
ceph-validate : validate provided configuration ------------------------- 1.74s
ceph-infra : open osd ports --------------------------------------------- 1.53s
ceph-mon : create ceph mgr keyring(s) ----------------------------------- 1.51s
ceph-handler : check if the ceph mon socket is in-use ------------------- 1.35s
ceph-osd : copy ceph key(s) if needed ----------------------------------- 1.21s
ceph-common : install yum plugin priorities ----------------------------- 1.18s
ceph-config : generate ceph configuration file: ceph.conf --------------- 1.17s

# show state (OK if [HEALTH_OK])
[cent@dlp ceph-ansible]$ ssh node01 "ceph --version"
ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)

[cent@dlp ceph-ansible]$ ssh node01 "sudo ceph -s"
  cluster:
    id:     2865e8b4-2037-4154-b89f-9bf8a9e03c56
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum node01 (age 36m)
    mgr: node01(active, since 28m)
    osd: 3 osds: 3 up (since 24m), 3 in (since 24m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 234 GiB / 237 GiB avail
    pgs: