Pacemaker : Set LVM Shared Storage (2021/06/11)
Configure an Active/Passive HA-LVM (High Availability LVM) volume in the Cluster.
This example is based on the following environment.
Before this setting, configure the basic settings of the Cluster and set up a Fence device first.

+--------------------+
| [ ISCSI Target ]   |
|   dlp.srv.world    |
+---------+----------+
 10.0.0.30|
          |
+----------------------+          |          +----------------------+
| [ Cluster Node#1 ]   |10.0.0.51 | 10.0.0.52| [ Cluster Node#2 ]   |
|  node01.srv.world    +----------+----------+  node02.srv.world    |
|                      |                     |                      |
+----------------------+                     +----------------------+
[1]
Create a shared storage on the ISCSI Target, refer to here.
In this example, the ISCSI storage was created with IQN [iqn.2021-06.world.srv:dlp.target02] and a size of [10G].
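For reference, the shared disk on the Target side can be created roughly as in the sketch below. It uses [targetcli] with a fileio backstore; the backing file path, the backstore name [disk02], and the need to add ACL entries for the initiators are assumptions here, not part of this page.

# a sketch on the ISCSI Target (dlp.srv.world), assuming [targetcli] is installed
# backstore name and path are examples; also add ACLs for each node's initiator IQN
[root@dlp ~]# mkdir -p /var/lib/iscsi_disks
[root@dlp ~]# targetcli
/> cd backstores/fileio
/backstores/fileio> create disk02 /var/lib/iscsi_disks/disk02.img 10G
/backstores/fileio> cd /iscsi
/iscsi> create iqn.2021-06.world.srv:dlp.target02
/iscsi> cd iqn.2021-06.world.srv:dlp.target02/tpg1/luns
/iscsi/iqn.../tpg1/luns> create /backstores/fileio/disk02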
[2]
On all Cluster Nodes, connect to the ISCSI storage created in [1].
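A minimal sketch of the initiator side follows, assuming the iscsi-initiator-utils package is installed and the target accepts the nodes' initiator IQNs; the device may appear under a name other than [sdb] on your systems.

# discover and log in to the target on 10.0.0.30 (run on each Cluster Node)
[root@node01 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.30
[root@node01 ~]# iscsiadm -m node -T iqn.2021-06.world.srv:dlp.target02 -p 10.0.0.30 --login
# confirm the new disk is visible
[root@node01 ~]# lsblk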
[3]
On all Cluster Nodes, change the LVM System ID setting.
[root@node01 ~]# vi /etc/lvm/lvm.conf
# line 1235 : change
system_id_source = "uname"
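The same change can also be applied non-interactively and then verified, as in the sketch below; the [sed] pattern assumes the stock commented-out default line, so check the resulting file afterwards.

# non-interactive edit (assumption: the default line is '# system_id_source = "none"')
[root@node01 ~]# sed -i 's/# system_id_source = "none"/system_id_source = "uname"/' /etc/lvm/lvm.conf
# the System ID reported by LVM should now match [uname -n]
[root@node01 ~]# lvm systemid
[root@node01 ~]# uname -n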
[4]
On a Node in the Cluster, set up LVM on the shared storage. [sdb] in the example below is the shared storage from the ISCSI Target.
# set LVM
[root@node01 ~]# parted --script /dev/sdb "mklabel gpt"
[root@node01 ~]# parted --script /dev/sdb "mkpart primary 0% 100%"
[root@node01 ~]# parted --script /dev/sdb "set 1 lvm on"
# create physical volume
[root@node01 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.

# create volume group
[root@node01 ~]# vgcreate vg_ha /dev/sdb1
  Volume group "vg_ha" successfully created with system ID node01.srv.world

# confirm the value of [System ID] equals the value of [$ uname -n]
[root@node01 ~]# vgs -o+systemid
  VG    #PV #LV #SN Attr   VSize   VFree  System ID
  cs      1   2   0 wz--n- <29.00g      0
  vg_ha   1   0   0 wz--n-  <9.98g <9.98g node01.srv.world

# create logical volume
[root@node01 ~]# lvcreate -l 100%FREE -n lv_ha vg_ha
  Logical volume "lv_ha" created.

# format with ext4
[root@node01 ~]# mkfs.ext4 /dev/vg_ha/lv_ha
# deactivate volume group
[root@node01 ~]# vgchange vg_ha -an
  0 logical volume(s) in volume group "vg_ha" now active
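As an optional sanity check (not part of the original steps), the new logical volume can be activated and mounted once on this node before it is handed over to the cluster:

# activate, mount, and clean up again ([/mnt] is just an example mount point)
[root@node01 ~]# vgchange vg_ha -ay
[root@node01 ~]# mount /dev/vg_ha/lv_ha /mnt
[root@node01 ~]# df -h /mnt
[root@node01 ~]# umount /mnt
[root@node01 ~]# vgchange vg_ha -an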
[5]
On the other Nodes (those except the Node of [4]), scan LVM volumes to detect the new volume.
[root@node02 ~]# lvm pvscan --cache --activate ay
  pvscan[1739] PV /dev/vda2 online, VG cl is complete.
  pvscan[1739] PV /dev/sdb1 ignore foreign VG.
  pvscan[1739] VG cl run autoactivation.
  2 logical volume(s) in volume group "cs" now active
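Optionally (this check is an assumption, not shown on this page), confirm on this node that [vg_ha] is visible but owned by node01's System ID and therefore not activated locally:

# foreign VGs are hidden by default, so use --foreign to display the owning System ID
[root@node02 ~]# vgs --foreign -o+systemid vg_ha
# no logical volume of vg_ha should be active on this node
[root@node02 ~]# lvs --foreign vg_ha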
[6]
On the Node of [4], set the shared storage as a Cluster resource.
# [lvm_ha] ⇒ any name
# [vgname=***] ⇒ volume group name
# [--group] ⇒ any name
[root@node01 ~]# pcs resource create lvm_ha ocf:heartbeat:LVM-activate vgname=vg_ha vg_access_mode=system_id --group ha_group

# confirm status
# OK if LVM resource is [Started]
[root@node01 ~]# pcs status
Cluster name: ha_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: node01.srv.world (version 2.0.5-9.el8_4.1-ba59be7122) - partition with quorum
  * Last updated: Fri Jun 11 01:22:02 2021
  * Last change:  Fri Jun 11 01:19:52 2021 by root via cibadmin on node01.srv.world
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ node01.srv.world node02.srv.world ]

Full List of Resources:
  * scsi-shooter        (stonith:fence_scsi):            Started node01.srv.world
  * Resource Group: ha_group:
    * lvm_ha            (ocf::heartbeat:LVM-activate):   Started node02.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
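As a possible next step (an assumption, not covered on this page), a [Filesystem] resource can be added to the same group so the ext4 volume is mounted on whichever node runs the group, and failover can then be tested by putting the currently active node into standby. The resource name [fs_ha] and mount point [/mnt/ha] are examples; the mount point must exist on both nodes.

# add an ext4 Filesystem resource to the same group (names and paths are examples)
[root@node01 ~]# pcs resource create fs_ha ocf:heartbeat:Filesystem \
device=/dev/vg_ha/lv_ha directory=/mnt/ha fstype=ext4 --group ha_group

# test failover: put the node currently running the group into standby, then check status
[root@node01 ~]# pcs node standby node02.srv.world
[root@node01 ~]# pcs status
[root@node01 ~]# pcs node unstandby node02.srv.world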