Pacemaker : Set Cluster Resource (NFS)
2020/02/24
Set up NFS cluster resources and configure an Active/Passive NFS server.
This example is based on the following environment.
1) Basic Cluster setting is done
2) Fence Device is set
3) LVM shared storage is set

                        +--------------------+
                        | [ ISCSI Target ]   |
                        | storage.srv.world  |
                        +----------+---------+
                         10.0.0.50 |
                                   |
+----------------------+           |           +----------------------+
| [ Cluster Node#1 ]   |10.0.0.51  |  10.0.0.52| [ Cluster Node#2 ]   |
| node01.srv.world     +-----------+-----------+ node02.srv.world     |
| NFS Server           |           |           | NFS Server           |
+----------------------+           |           +----------------------+
                            vip:10.0.0.100
                                   |
                        +----------+---------+
                        |  [ NFS Clients ]   |
                        +--------------------+
[1] On all Cluster Nodes, if Firewalld is running, allow the NFS services.
[root@node01 ~]# firewall-cmd --add-service=nfs --permanent
success
# for NFSv3
[root@node01 ~]# firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent
success
[root@node01 ~]# firewall-cmd --reload
success
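To confirm the rules are active after the reload, list the allowed services in the default zone. The output below is illustrative; [high-availability] would have been allowed during the basic cluster setup:

[root@node01 ~]# firewall-cmd --list-services
cockpit dhcpv6-client high-availability mountd nfs nfs3 rpc-bind ssh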
[2] On a Node in the Cluster, add the NFS resources. [/dev/vg_ha/lv_ha] in the example below is the LVM shared storage.
[root@node01 ~]# pcs status
Cluster name: ha_cluster
Stack: corosync
Current DC: node01.srv.world (version 2.0.2-3.el8_1.2-744a30d655) - partition with quorum
Last updated: Fri Feb 20 01:51:35 2020
Last change: Fri Feb 20 01:50:18 2020 by root via cibadmin on node01.srv.world

2 nodes configured
2 resources configured

Online: [ node01.srv.world node02.srv.world ]

Full list of resources:

 scsi-shooter   (stonith:fence_scsi):   Started node01.srv.world
 Resource Group: ha_group
     lvm_ha     (ocf::heartbeat:LVM-activate):  Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# set Filesystem resource
# [nfs_share] ⇒ any name
# [device=***] ⇒ shared storage
# [directory=***] ⇒ mount point
# [--group ***] ⇒ set in the same group as the shared storage
[root@node01 ~]# pcs resource create nfs_share ocf:heartbeat:Filesystem device=/dev/vg_ha/lv_ha directory=/home/nfs-share fstype=ext4 --group ha_group
[root@node01 ~]# pcs status
Cluster name: ha_cluster
Stack: corosync
Current DC: node01.srv.world (version 2.0.2-3.el8_1.2-744a30d655) - partition with quorum
Last updated: Fri Feb 20 01:31:41 2020
Last change: Fri Feb 20 01:31:35 2020 by root via cibadmin on node02.srv.world

2 nodes configured
3 resources configured

Online: [ node01.srv.world node02.srv.world ]

Full list of resources:

 scsi-shooter   (stonith:fence_scsi):   Started node01.srv.world
 Resource Group: ha_group
     lvm_ha     (ocf::heartbeat:LVM-activate):  Started node01.srv.world
     nfs_share  (ocf::heartbeat:Filesystem):    Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# mounted automatically on the node where the resources started
[root@node01 ~]# df -hT /home/nfs-share
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_ha-lv_ha ext4  9.8G   37M  9.3G   1% /home/nfs-share

# set nfsserver resource
# [nfs_daemon] ⇒ any name
# [nfs_shared_infodir=***] ⇒ specify a directory where NFS server related files are located
[root@node01 ~]# pcs resource create nfs_daemon ocf:heartbeat:nfsserver nfs_shared_infodir=/home/nfs-share/nfsinfo nfs_no_notify=true --group ha_group
# set IPaddr2 resource
# virtual IP address that clients use to access the NFS service
[root@node01 ~]# pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24 --group ha_group
# set nfsnotify resource
# [source_host=***] ⇒ same as the virtual IP address above
[root@node01 ~]# pcs resource create nfs_notify ocf:heartbeat:nfsnotify source_host=10.0.0.100 --group ha_group
[root@node01 ~]# pcs status
Cluster name: ha_cluster
Stack: corosync
Current DC: node01.srv.world (version 2.0.2-3.el8_1.2-744a30d655) - partition with quorum
Last updated: Fri Feb 20 01:50:55 2020
Last change: Fri Feb 20 01:50:36 2020 by root via cibadmin on node01.srv.world

2 nodes configured
6 resources configured

Online: [ node01.srv.world node02.srv.world ]

Full list of resources:

 scsi-shooter   (stonith:fence_scsi):   Started node01.srv.world
 Resource Group: ha_group
     lvm_ha     (ocf::heartbeat:LVM-activate):  Started node01.srv.world
     nfs_share  (ocf::heartbeat:Filesystem):    Started node01.srv.world
     nfs_daemon (ocf::heartbeat:nfsserver):     Started node01.srv.world
     nfs_vip    (ocf::heartbeat:IPaddr2):       Started node01.srv.world
     nfs_notify (ocf::heartbeat:nfsnotify):     Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
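Resources in a Pacemaker group start in the listed order and stop in reverse, so the order above (shared storage ⇒ filesystem ⇒ NFS daemon ⇒ virtual IP ⇒ notify) also acts as the start/stop ordering on failover. To confirm the member order, run the following (output is illustrative):

[root@node01 ~]# pcs resource group list
ha_group: lvm_ha nfs_share nfs_daemon nfs_vip nfs_notify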
[3] On the active Node where the NFS filesystem is mounted, configure the exportfs settings.
# set exportfs resource
# [nfs_root] ⇒ any name
# [clientspec=*** options=*** directory=***] ⇒ exports setting
# [fsid=0] ⇒ root point on NFSv4
[root@node01 ~]# pcs resource create nfs_root ocf:heartbeat:exportfs clientspec=10.0.0.0/255.255.255.0 options=rw,sync,no_root_squash directory=/home/nfs-share/nfs-root fsid=0 --group ha_group

# set exportfs resource
[root@node01 ~]# pcs resource create nfs_share01 ocf:heartbeat:exportfs clientspec=10.0.0.0/255.255.255.0 options=rw,sync,no_root_squash directory=/home/nfs-share/nfs-root/share01 fsid=1 --group ha_group
[root@node01 ~]# pcs status
Cluster name: ha_cluster
Stack: corosync
Current DC: node01.srv.world (version 2.0.2-3.el8_1.2-744a30d655) - partition with quorum
Last updated: Fri Feb 20 01:51:48 2020
Last change: Fri Feb 20 01:51:42 2020 by root via cibadmin on node01.srv.world

2 nodes configured
8 resources configured

Online: [ node01.srv.world node02.srv.world ]

Full list of resources:

 scsi-shooter    (stonith:fence_scsi):   Started node01.srv.world
 Resource Group: ha_group
     lvm_ha      (ocf::heartbeat:LVM-activate):  Started node01.srv.world
     nfs_share   (ocf::heartbeat:Filesystem):    Started node01.srv.world
     nfs_daemon  (ocf::heartbeat:nfsserver):     Started node01.srv.world
     nfs_vip     (ocf::heartbeat:IPaddr2):       Started node01.srv.world
     nfs_notify  (ocf::heartbeat:nfsnotify):     Started node01.srv.world
     nfs_root    (ocf::heartbeat:exportfs):      Started node01.srv.world
     nfs_share01 (ocf::heartbeat:exportfs):      Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@node01 ~]# showmount -e
Export list for node01.srv.world:
/home/nfs-share/nfs-root         10.0.0.0/255.255.255.0
/home/nfs-share/nfs-root/share01 10.0.0.0/255.255.255.0
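Since the exports are managed by the [exportfs] resource agents rather than /etc/exports, the kernel export table can also be inspected directly on whichever node currently runs the group. A sketch; the exact option list in the output will vary with agent defaults:

[root@node01 ~]# exportfs -v
/home/nfs-share/nfs-root
                10.0.0.0/255.255.255.0(sync,wdelay,hide,no_subtree_check,fsid=0,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/nfs-share/nfs-root/share01
                10.0.0.0/255.255.255.0(sync,wdelay,hide,no_subtree_check,fsid=1,sec=sys,rw,secure,no_root_squash,no_all_squash)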
[4] Verify the settings by accessing the virtual IP address over NFS from any client computer.
# mount with NFSv4
[root@client ~]# mount -t nfs4 10.0.0.100:share01 /mnt
[root@client ~]# df -hT /mnt
Filesystem           Type  Size  Used Avail Use% Mounted on
10.0.0.100:/share01  nfs4  9.8G   36M  9.3G   1% /mnt

# mount with NFSv3
[root@client ~]# mount -t nfs 10.0.0.100:/home/nfs-share/nfs-root/share01 /mnt
[root@client ~]# df -T /mnt
Filesystem                                  Type 1K-blocks  Used Available Use% Mounted on
10.0.0.100:/home/nfs-share/nfs-root/share01 nfs   10239488 36864   9662464   1% /mnt
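As a final check, it is possible to fail the group over manually and confirm that the client mount keeps working. A minimal sketch, assuming node01 currently runs the resources; [pcs node standby] pushes them to node02, and [pcs node unstandby] makes node01 eligible again:

# put the active node into standby to force a failover
[root@node01 ~]# pcs node standby node01.srv.world
# all resources in [ha_group] should now show [Started node02.srv.world]
[root@node01 ~]# pcs status resources
# allow node01 to host resources again
[root@node01 ~]# pcs node unstandby node01.srv.world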