Ceph Squid : Cephadm #1 Configure Cluster (2024/10/25)
This is a fresh build of a Ceph cluster using the Ceph cluster deployment tool [Cephadm].
In this example, the cluster is configured with 3 nodes.
Each of the 3 nodes must have a free block device. (this example uses [/dev/sdb])
+----------------------------+----------------------------+
| | |
|10.0.0.51 |10.0.0.52 |10.0.0.53
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| [node01.srv.world] | | [node02.srv.world] | | [node03.srv.world] |
| Object Storage +----+ Object Storage +----+ Object Storage |
| Monitor Daemon | | | | |
| Manager Daemon | | | | |
+-----------------------+ +-----------------------+ +-----------------------+
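As a quick prerequisite check, you can confirm on each node that the spare block device is really unused. A minimal sketch, assuming the device is [/dev/sdb] as above (the device name may differ in your environment):

# run on every node : the disk should show no partitions, and wipefs should report no filesystem signatures
[root@node01 ~]# lsblk /dev/sdb
[root@node01 ~]# wipefs /dev/sdb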
[1]
[Cephadm] requires Python 3 to control each node.
So, install Python 3 on all nodes beforehand, referring to [1] here.
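If the linked guide is not at hand, the following is a minimal sketch of the Python 3 install on CentOS Stream, assuming the distribution's stock [python3] package is sufficient for [Cephadm]:

# run on every node
[root@node01 ~]# dnf -y install python3
[root@node01 ~]# python3 -V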
[2]
[Cephadm] builds the Ceph cluster with containers, so install Podman on all nodes beforehand.
[root@node01 ~]# dnf -y install podman
[3]
Install [Cephadm] on the node that runs [Monitor Daemon] and [Manager Daemon]. This example proceeds on [node01].
[root@node01 ~]# dnf -y install centos-release-ceph-squid epel-release python3-jinja2
[root@node01 ~]# dnf -y install cephadm
[4]
Bootstrap a new Ceph cluster.
[root@node01 ~]# mkdir -p /etc/ceph
[root@node01 ~]# cephadm bootstrap --mon-ip 10.0.0.51 --allow-fqdn-hostname
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 5.2.3 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: f3c2c95c-9272-11ef-8253-5254000f7c42
Verifying IP 10.0.0.51 port 3300 ...
Verifying IP 10.0.0.51 port 6789 ...
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
Mon IP `10.0.0.51` is in CIDR network `10.0.0.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v19...
.....
.....
firewalld ready
Ceph Dashboard is now available at:
URL: https://node01.srv.world:8443/
User: admin
Password: 1i16l09vrz
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/f3c2c95c-9272-11ef-8253-5254000f7c42/config directory
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo /usr/sbin/cephadm shell --fsid f3c2c95c-9272-11ef-8253-5254000f7c42 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/en/latest/mgr/telemetry/
Bootstrap complete.
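The log above notes that [--cluster-network] was not provided, so OSD replication traffic shares the public network. If a dedicated replication network exists, a sketch of bootstrapping with it specified looks like below; the [10.0.1.0/24] network is only an assumption for illustration:

# hypothetical example : use a separate cluster network for OSD replication
[root@node01 ~]# cephadm bootstrap --mon-ip 10.0.0.51 --cluster-network 10.0.1.0/24 --allow-fqdn-hostname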
# enable the Ceph CLI
[root@node01 ~]# alias ceph='cephadm shell -- ceph'
[root@node01 ~]# echo "alias ceph='cephadm shell -- ceph'" >> ~/.bash_profile
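Instead of the alias, the bootstrap message above also offers an interactive shell inside the Ceph container; a brief sketch:

# enter the Ceph container, run commands there, then leave
[root@node01 ~]# cephadm shell
[ceph: root@node01 /]# ceph -s
[ceph: root@node01 /]# exit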
[root@node01 ~]# ceph -v
Inferring fsid f3c2c95c-9272-11ef-8253-5254000f7c42
Inferring config /var/lib/ceph/f3c2c95c-9272-11ef-8253-5254000f7c42/mon.node01/config
Using ceph image with id '37996728e013' and tag 'v19' created on 2024-09-27 22:08:21 +0000 UTC
quay.io/ceph/ceph@sha256:200087c35811bf28e8a8073b15fa86c07cce85c575f1ccd62d1d6ddbfdc6770a
ceph version 19.2.0 (16063ff2022298c9300e49a547a16ffda59baf13) squid (stable)

# OSDs are not configured yet, so [HEALTH_WARN] is fine at this point
[root@node01 ~]# ceph -s
Inferring fsid f3c2c95c-9272-11ef-8253-5254000f7c42
Inferring config /var/lib/ceph/f3c2c95c-9272-11ef-8253-5254000f7c42/mon.node01/config
Using ceph image with id '37996728e013' and tag 'v19' created on 2024-09-27 22:08:21 +0000 UTC
quay.io/ceph/ceph@sha256:200087c35811bf28e8a8073b15fa86c07cce85c575f1ccd62d1d6ddbfdc6770a
cluster:
id: f3c2c95c-9272-11ef-8253-5254000f7c42
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum node01 (age 2m)
mgr: node01.kurtfw(active, since 82s)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
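The [HEALTH_WARN] above only reflects the missing OSDs ([OSD count 0 < osd_pool_default_size 3]) and clears once OSDs are added in a later step. To look into the current warning, a short sketch:

# show the reason behind the current health status
[root@node01 ~]# ceph health detail

# list block devices cephadm considers available for OSDs (should include [/dev/sdb])
[root@node01 ~]# ceph orch device ls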
# a container runs for each service role
[root@node01 ~]# podman ps
CONTAINER ID  IMAGE                                                                                      COMMAND               CREATED             STATUS             PORTS       NAMES
8c387f544b6f  quay.io/ceph/ceph:v19                                                                      -n mon.node01 -f ...  3 minutes ago       Up 3 minutes                   ceph-f3c2c95c-9272-11ef-8253-5254000f7c42-mon-node01
3a2f7dac8d90  quay.io/ceph/ceph:v19                                                                      -n mgr.node01.kur...  3 minutes ago       Up 3 minutes                   ceph-f3c2c95c-9272-11ef-8253-5254000f7c42-mgr-node01-kurtfw
229aae7f559e  quay.io/ceph/ceph@sha256:200087c35811bf28e8a8073b15fa86c07cce85c575f1ccd62d1d6ddbfdc6770a  -n client.ceph-ex...  2 minutes ago       Up 2 minutes                   ceph-f3c2c95c-9272-11ef-8253-5254000f7c42-ceph-exporter-node01
16751a2dda34  quay.io/ceph/ceph@sha256:200087c35811bf28e8a8073b15fa86c07cce85c575f1ccd62d1d6ddbfdc6770a  -n client.crash.n...  2 minutes ago       Up 2 minutes                   ceph-f3c2c95c-9272-11ef-8253-5254000f7c42-crash-node01
428a50e85b31  quay.io/prometheus/node-exporter:v1.5.0                                                    --no-collector.ti...  2 minutes ago       Up 2 minutes                   ceph-f3c2c95c-9272-11ef-8253-5254000f7c42-node-exporter-node01
093fb2342af9  quay.io/prometheus/prometheus:v2.43.0                                                      --config.file=/et...  2 minutes ago       Up 2 minutes                   ceph-f3c2c95c-9272-11ef-8253-5254000f7c42-prometheus-node01
d2c8e8f85edb  quay.io/prometheus/alertmanager:v0.25.0                                                    --cluster.listen-...  About a minute ago  Up About a minute              ceph-f3c2c95c-9272-11ef-8253-5254000f7c42-alertmanager-node01
9add4fce7857  quay.io/ceph/grafana:9.4.12                                                                                      About a minute ago  Up About a minute              ceph-f3c2c95c-9272-11ef-8253-5254000f7c42-grafana-node01

# a systemd unit runs each of these containers
[root@node01 ~]# systemctl status ceph-* --no-pager
* ceph-f3c2c95c-9272-11ef-8253-5254000f7c42@crash.node01.service - Ceph crash.node01 for f3c2c95c-9272-11ef-8253-5254000f7c42
Loaded: loaded (/etc/systemd/system/ceph-f3c2c95c-9272-11ef-8253-5254000f7c42@.service; enabled; preset: disabled)
Active: active (running) since Fri 2024-10-25 10:47:32 JST; 3min 0s ago
Main PID: 8198 (conmon)
Tasks: 3 (limit: 23124)
Memory: 7.7M
CPU: 239ms
CGroup: /system.slice/system-ceph\x2df3c2c95c\x2d9272\x2d11ef\x2d8253\x2d5254000f7c42.slice/ceph-f3c2c95c-9272-11ef-8253-5254000f7c42@crash.node01.service
.....
.....
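In addition to [podman ps] and systemd, the orchestrator itself can report what it manages; a short sketch of the cluster-level view:

# list all daemons managed by the cephadm orchestrator
[root@node01 ~]# ceph orch ps

# list the hosts registered in the cluster (only [node01] at this point)
[root@node01 ~]# ceph orch host ls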