Pacemaker : Set Cluster Resource (NFS)
2025/10/09
Set NFS cluster resources and configure an Active/Passive NFS Server.
This example is based on the following environment.
                        +--------------------+
                        | [ ISCSI Target ]   |
                        |   dlp.srv.world    |
                        +----------+---------+
                         10.0.0.30 |
                                   |
+----------------------+           |           +----------------------+
| [ Cluster Node#1 ]   |10.0.0.51  |  10.0.0.52| [ Cluster Node#2 ]   |
|   node01.srv.world   +-----------+-----------+   node02.srv.world   |
|      NFS Server      |           |           |      NFS Server      |
+----------------------+           |           +----------------------+
                              vip:10.0.0.60
                                   |
                        +----------+---------+
                        | [ NFS Clients ]    |
                        |                    |
                        +--------------------+

[1] On all Cluster Nodes, install NFS tools.
root@node01:~# apt -y install nfs-kernel-server nfs-common
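
Because the NFS service will be started and stopped by the cluster (via the ocf:heartbeat:nfsserver resource added later), it is usually undesirable for the systemd unit to also start it at boot. This is an assumption about your setup; skip it if you intentionally run nfs-server outside the cluster. A minimal sketch, run on all Cluster Nodes:
# stop the service and prevent it from auto-starting at boot; Pacemaker will control it
root@node01:~# systemctl disable --now nfs-server
# confirm the unit is no longer enabled
root@node01:~# systemctl is-enabled nfs-server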

[2] On the node where the LVM shared storage is active in the cluster, add the NFS resources. [/dev/vg_ha/lv_ha] in the example below is the LVM shared storage.
# current status
root@node01:~# pcs status
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: node01.srv.world (version 3.0.0-3.0.0) - partition with quorum
* Last updated: Thu Oct 9 09:48:40 2025 on node01.srv.world
* Last change: Thu Oct 9 09:46:10 2025 by root via root on node01.srv.world
* 2 nodes configured
* 2 resource instances configured
Node List:
* Online: [ node01.srv.world node02.srv.world ]
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
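
Before creating the Filesystem resource, the mount point should exist. This example assumes [/home/nfs-share] is already present on both nodes; if it is not, and the Filesystem agent on your distribution does not create it automatically, create it first. A minimal preparation step, run on all Cluster Nodes:
# the directory only needs to exist; the cluster handles mounting the shared volume onto it
root@node01:~# mkdir -p /home/nfs-share
root@node02:~# mkdir -p /home/nfs-share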
# set Filesystem resource
# [nfs_share] : any name
# [device=***] : shared storage
# [directory=***] : mount point
# [group ***] : put it in the same group as the shared storage
root@node01:~# pcs resource create nfs_share ocf:heartbeat:Filesystem device=/dev/vg_ha/lv_ha directory=/home/nfs-share fstype=ext4 group ha_group --agent-validation --future
root@node01:~# pcs status
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: node01.srv.world (version 3.0.0-3.0.0) - partition with quorum
* Last updated: Thu Oct 9 09:51:47 2025 on node01.srv.world
* Last change: Thu Oct 9 09:51:39 2025 by root via root on node01.srv.world
* 2 nodes configured
* 3 resource instances configured
Node List:
* Online: [ node01.srv.world node02.srv.world ]
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
* nfs_share (ocf:heartbeat:Filesystem): Started node01.srv.world
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
# mounted automatically on the node where the resource started
root@node01:~# df -hT /home/nfs-share
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_ha-lv_ha ext4  9.8G  2.1M  9.3G   1% /home/nfs-share

# set nfsserver resource
# [nfs_daemon] : any name
# [nfs_shared_infodir=***] : specify a directory where NFS server related files are placed
root@node01:~# pcs resource create nfs_daemon ocf:heartbeat:nfsserver nfs_shared_infodir=/home/nfs-share/nfsinfo nfs_no_notify=true group ha_group --future
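
The [nfs_shared_infodir] places the NFS server state (the files normally kept under /var/lib/nfs) on the shared filesystem, so client state follows the group to whichever node is active. The nfsserver agent is expected to create this directory and bind it over /var/lib/nfs while the resource runs; this is an assumption about the agent's behavior and may differ between versions. A quick check once the resource is started:
# the state directory should exist on the shared filesystem
root@node01:~# ls /home/nfs-share/nfsinfo
# /var/lib/nfs should be a (bind) mount while the resource is running
root@node01:~# findmnt /var/lib/nfs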
# set IPaddr2 resource
# virtual IP address that clients use to access the NFS service
root@node01:~# pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=10.0.0.60 cidr_netmask=24 group ha_group --future
root@node01:~# pcs status
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: node01.srv.world (version 3.0.0-3.0.0) - partition with quorum
* Last updated: Thu Oct 9 09:53:45 2025 on node01.srv.world
* Last change: Thu Oct 9 09:53:38 2025 by root via root on node01.srv.world
* 2 nodes configured
* 5 resource instances configured
Node List:
* Online: [ node01.srv.world node02.srv.world ]
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
* nfs_share (ocf:heartbeat:Filesystem): Started node01.srv.world
* nfs_daemon (ocf:heartbeat:nfsserver): Started node01.srv.world
* nfs_vip (ocf:heartbeat:IPaddr2): Started node01.srv.world
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
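
Resources in a group start in the listed order (lvm_ha → nfs_share → nfs_daemon → nfs_vip) and stop in reverse, so the virtual IP only comes up after the NFS service is ready. A quick check that the VIP is actually configured on the node where [nfs_vip] is started:
# the VIP should appear as an additional address on one of the node's interfaces
root@node01:~# ip -4 addr show | grep 10.0.0.60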

[3] On the active node where the NFS filesystem is mounted, configure the exportfs settings.
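
This example assumes the export directories ([/home/nfs-share/nfs-root] and [/home/nfs-share/nfs-root/share01]) already exist on the shared filesystem; if they do not, create them before adding the resources. A minimal preparation step, run only on the node where [/home/nfs-share] is currently mounted:
# the directories live on the shared volume, so one node is enough
root@node01:~# mkdir -p /home/nfs-share/nfs-root/share01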
# set exportfs resource
# [nfs_root] : any name
# [clientspec=*** options=*** directory=***] : exports settings
# [fsid=0] : root export point for NFSv4
root@node01:~# pcs resource create nfs_root ocf:heartbeat:exportfs clientspec=10.0.0.0/255.255.255.0 options=rw,sync,no_root_squash directory=/home/nfs-share/nfs-root fsid=0 group ha_group --future

# set exportfs resource
root@node01:~# pcs resource create nfs_share01 ocf:heartbeat:exportfs clientspec=10.0.0.0/255.255.255.0 options=rw,sync,no_root_squash directory=/home/nfs-share/nfs-root/share01 fsid=1 group ha_group --future
root@node01:~# pcs status
Cluster name: ha_cluster
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: node01.srv.world (version 3.0.0-3.0.0) - partition with quorum
* Last updated: Thu Oct 9 09:55:41 2025 on node01.srv.world
* Last change: Thu Oct 9 09:55:35 2025 by root via root on node01.srv.world
* 2 nodes configured
* 7 resource instances configured
Node List:
* Online: [ node01.srv.world node02.srv.world ]
Full List of Resources:
* scsi-shooter (stonith:fence_scsi): Started node01.srv.world
* Resource Group: ha_group:
* lvm_ha (ocf:heartbeat:LVM-activate): Started node01.srv.world
* nfs_share (ocf:heartbeat:Filesystem): Started node01.srv.world
* nfs_daemon (ocf:heartbeat:nfsserver): Started node01.srv.world
* nfs_vip (ocf:heartbeat:IPaddr2): Started node01.srv.world
* nfs_root (ocf:heartbeat:exportfs): Started node01.srv.world
* nfs_share01 (ocf:heartbeat:exportfs): Started node01.srv.world
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
root@node01:~# showmount -e
Export list for node01.srv.world:
/home/nfs-share/nfs-root         10.0.0.0/255.255.255.0
/home/nfs-share/nfs-root/share01 10.0.0.0/255.255.255.0
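
Because the exportfs resources belong to [ha_group], the exports only exist on the node where the group is running; the passive node should show an empty export list. A quick cross-check, assuming node02 is currently the passive node:
# on the passive node, no cluster-managed exports should be listed
root@node02:~# showmount -e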

[4] Verify the settings by accessing the virtual IP address over NFS from any client computer.
root@client:~# mount -t nfs4 10.0.0.60:share01 /mnt
root@client:~# df -hT /mnt
Filesystem         Type  Size  Used Avail Use% Mounted on
10.0.0.60:/share01 nfs4  9.8G  2.0M  9.3G   1% /mnt
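
To confirm the Active/Passive behavior, it is also possible to fail the group over manually and check that the client keeps access through the same virtual IP. A minimal sketch using the node names from this example; remember to clear standby afterwards:
# put the currently active node into standby; the whole group should move to node02
root@node01:~# pcs node standby node01.srv.world
root@node01:~# pcs status
# the client should still reach the share via 10.0.0.60
root@client:~# df -hT /mnt
# bring node01 back online when finished
root@node01:~# pcs node unstandby node01.srv.world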