Pacemaker : CLVM + GFS2
2015/07/08
Configure a storage cluster with CLVM + GFS2.
This example is based on the following environment.

                        +--------------------+
                        | [ Shared Storage ] |
                        |    iSCSI Target    |
                        +---------+----------+
                         10.0.0.30|
                                  |
+----------------------+          |          +----------------------+
| [ Node01 ]           |10.0.0.51 | 10.0.0.52|           [ Node02 ] |
| node01.srv.world     +----------+----------+     node02.srv.world |
| CLVM                 |                     |                 CLVM |
+----------------------+                     +----------------------+
[1]  Create a basic cluster environment first, refer to here.
[2]  Create iSCSI shared storages, refer to here.
     Two shared storage devices are needed: one for data and one for the fence device.
     This example uses "iqn.2015-07.world.server:storage.target01" for data and "iqn.2015-07.world.server:fence.target00" for the fence device.
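The target-side setup is covered on the linked page. Purely as a hedged sketch (the backstore names "disk01"/"disk02", the backing device paths, and the "[root@target ~]#" host are examples, and initiator ACL/authentication settings are omitted), the two targets could be created with targetcli roughly like this:

# sketch only: create block backstores for the data and fence LUNs (backing devices are examples)
[root@target ~]# targetcli /backstores/block create name=disk01 dev=/dev/vg_target/lv_storage
[root@target ~]# targetcli /backstores/block create name=disk02 dev=/dev/vg_target/lv_fence
# create the two iSCSI targets and attach one LUN to each
[root@target ~]# targetcli /iscsi create iqn.2015-07.world.server:storage.target01
[root@target ~]# targetcli /iscsi create iqn.2015-07.world.server:fence.target00
[root@target ~]# targetcli /iscsi/iqn.2015-07.world.server:storage.target01/tpg1/luns create /backstores/block/disk01
[root@target ~]# targetcli /iscsi/iqn.2015-07.world.server:fence.target00/tpg1/luns create /backstores/block/disk02
# persist the configuration (initiator ACLs still need to be added as on the linked page)
[root@target ~]# targetcli saveconfig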
[3]  Configure the iSCSI initiator on all Nodes, refer to here.
     It's OK not to create partitions on the new disks yet.
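The initiator configuration itself is on the linked page; as a rough sketch of what it involves, discovery and login against the target host 10.0.0.30 look roughly like this (outputs omitted):

# discover the targets exported by the shared storage host and log in to all of them
[root@node01 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.30
[root@node01 ~]# iscsiadm -m node --login
# confirm the sessions and the resulting local disks (sda and sdb in this example)
[root@node01 ~]# iscsiadm -m session -o show
[root@node01 ~]# lsblk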
[4]  Install the required packages on all Nodes.
[root@node01 ~]# yum -y install fence-agents-all lvm2-cluster gfs2-utils
[root@node01 ~]# lvmconf --enable-cluster
[root@node01 ~]# reboot
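As an optional check (not part of the original steps), you can confirm that "lvmconf --enable-cluster" switched LVM to clustered locking; the locking type should be 3 after the change:

# clustered locking is locking_type = 3
[root@node01 ~]# lvm dumpconfig global/locking_type
locking_type=3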
[5]  Configure the fence device. It's OK to set it on any one node. The "/dev/sda" in the example below is simply the shared storage device used for fencing.
# confirm the fence device disk (it is sda in this example)
[root@node01 ~]# cat /proc/partitions
major minor  #blocks  name
.....
.....
 253        2    1048576 dm-2
   8        0    1048576 sda
   8       16   20971520 sdb

# confirm the disk's ID
[root@node01 ~]# ll /dev/disk/by-id | grep sda
lrwxrwxrwx 1 root root  9 Jul 10 11:44 scsi-36001405189b893893594dffb3a2cb3e9 -> ../../sda
lrwxrwxrwx 1 root root  9 Jul 10 11:44 wwn-0x6001405189b893893594dffb3a2cb3e9 -> ../../sda

[root@node01 ~]# pcs stonith create scsi-shooter fence_scsi devices=/dev/disk/by-id/wwn-0x6001405189b893893594dffb3a2cb3e9 meta provides=unfencing
[root@node01 ~]# pcs property set no-quorum-policy=freeze
[root@node01 ~]# pcs stonith show scsi-shooter
 Resource: scsi-shooter (class=stonith type=fence_scsi)
  Attributes: devices=/dev/disk/by-id/wwn-0x6001405189b893893594dffb3a2cb3e9
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (scsi-shooter-monitor-interval-60s)
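Optionally (this is not in the original procedure and assumes the sg3_utils package is installed), the SCSI-3 persistent reservation keys that fence_scsi registers on the device can be listed with sg_persist:

# each cluster node should have registered one reservation key on the fence device
[root@node01 ~]# sg_persist -n -i -k -d /dev/disk/by-id/wwn-0x6001405189b893893594dffb3a2cb3e9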
[6]  Add the required resources (DLM and clvmd). It's OK to set them on any one node.
[root@node01 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@node01 ~]# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[root@node01 ~]# pcs constraint order start dlm-clone then clvmd-clone
Adding dlm-clone clvmd-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@node01 ~]# pcs constraint colocation add clvmd-clone with dlm-clone
[root@node01 ~]# pcs status resources
 Clone Set: dlm-clone [dlm]
     Started: [ node01.srv.world node02.srv.world ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ node01.srv.world node02.srv.world ]
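As an extra optional check of the DLM layer, dlm_tool can list the lockspaces; once the clvmd clone is running, a clvmd lockspace should appear on both nodes:

# list active DLM lockspaces on this node
[root@node01 ~]# dlm_tool ls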
[7]  Create a volume on the shared storage and format it with GFS2. It's OK to do this on any one node. In this example, the data disk is sdb; create a partition on it and set the partition type to LVM with fdisk first.
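The fdisk step is interactive; as a non-interactive alternative sketch that gives the same result (one primary partition on /dev/sdb flagged as LVM), parted could be used like this:

# create one LVM-flagged primary partition spanning the data disk (equivalent to fdisk type 8e)
[root@node01 ~]# parted --script /dev/sdb "mklabel msdos mkpart primary 1MiB 100% set 1 lvm on"
[root@node01 ~]# partprobe /dev/sdb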
[root@node01 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
# create a clustered volume group
[root@node01 ~]# vgcreate -cy vg_cluster /dev/sdb1
  Clustered volume group "vg_cluster" successfully created
[root@node01 ~]# lvcreate -l100%FREE -n lv_cluster vg_cluster
  Logical volume "lv_cluster" created.
[root@node01 ~]# mkfs.gfs2 -p lock_dlm -t ha_cluster:gfs2 -j 2 /dev/vg_cluster/lv_cluster
/dev/vg_cluster/lv_cluster is a symbolic link to /dev/dm-3
This will destroy any data on /dev/dm-3
Are you sure you want to proceed? [y/n] y
Device:                    /dev/vg_cluster/lv_cluster
Block size:                4096
Device size:               0.99 GB (260096 blocks)
Filesystem size:           0.99 GB (260092 blocks)
Journals:                  2
Resource groups:           5
Locking protocol:          "lock_dlm"
Lock table:                "ha_cluster:gfs2"
UUID:                      cdda1b15-8c57-67a1-481f-4ad3bbeb1b2f
[8]  Add the shared storage as a cluster resource. It's OK to set it on any one node.
[root@node01 ~]# pcs resource create fs_gfs2 Filesystem \
device="/dev/vg_cluster/lv_cluster" directory="/mnt" fstype="gfs2" \
options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
[root@node01 ~]# pcs resource show
 Clone Set: dlm-clone [dlm]
     Started: [ node01.srv.world ]
     Stopped: [ node02.srv.world ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ node01.srv.world ]
     Stopped: [ node02.srv.world ]
 Clone Set: fs_gfs2-clone [fs_gfs2]
     Started: [ node01.srv.world ]
[root@node01 ~]# pcs constraint order start clvmd-clone then fs_gfs2-clone
Adding clvmd-clone fs_gfs2-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@node01 ~]# pcs constraint colocation add fs_gfs2-clone with clvmd-clone
[root@node01 ~]# pcs constraint show
Location Constraints:
Ordering Constraints:
  start dlm-clone then start clvmd-clone (kind:Mandatory)
  start clvmd-clone then start fs_gfs2-clone (kind:Mandatory)
Colocation Constraints:
  clvmd-clone with dlm-clone (score:INFINITY)
  fs_gfs2-clone with clvmd-clone (score:INFINITY)
[9]  That's all for the setup. Make sure the GFS2 filesystem is mounted on the active node, and also make sure the GFS2 mount moves to the other node if the currently active node goes down.
[root@node01 ~]# df -hT
Filesystem                        Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root           xfs        27G  1.1G   26G   4% /
devtmpfs                          devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                             tmpfs     2.0G   76M  1.9G   4% /dev/shm
tmpfs                             tmpfs     2.0G  8.4M  2.0G   1% /run
tmpfs                             tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                         xfs       497M  126M  371M  26% /boot
/dev/mapper/vg_cluster-lv_cluster gfs2     1016M  259M  758M  26% /mnt
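One way to exercise the failover mentioned above (a sketch, not part of the original page) is to stop the cluster stack on the active node and then confirm the GFS2 mount and resources from the other node:

# stop Pacemaker/Corosync on the currently active node ...
[root@node01 ~]# pcs cluster stop node01.srv.world
# ... then verify on the other node that the GFS2 filesystem is mounted there
[root@node02 ~]# df -hT /mnt
[root@node02 ~]# pcs status resources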