CentOS Stream 9

Pacemaker : Add or Remove Nodes
2023/12/05
 
Add new nodes to an existing cluster.
As an example, add [node03] to the cluster as shown below.
                       +--------------------+
                       | [  ISCSI Target  ] |
                       |    dlp.srv.world   |
                       +----------+---------+
                         10.0.0.30|
                                  |
+----------------------+          |          +----------------------+
| [  Cluster Node#1  ] |10.0.0.51 | 10.0.0.52| [  Cluster Node#2  ] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
+----------------------+          |          +----------------------+
                                  |
                                  |10.0.0.53
                       +-----------------------+
                       | [  Cluster Node#3  ]  |
                       |   node03.srv.world    |
                       +-----------------------+

[1]

Install Pacemaker on the new node first; refer to steps [1] and [2] here.
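As a rough outline, those steps amount to something like the following on the new node (a minimal sketch, assuming the same package set, firewalld service, and [hacluster] password handling as on the existing nodes; adjust to your environment).

# the HighAvailability repository provides the packages (repo id [highavailability] on CentOS Stream 9)

[root@node03 ~]#
dnf --enablerepo=highavailability -y install pacemaker pcs

[root@node03 ~]#
systemctl enable --now pcsd

# set a password for the [hacluster] user (it is used by [pcs host auth] below)

[root@node03 ~]#
passwd hacluster

[root@node03 ~]#
firewall-cmd --add-service=high-availability --permanent && firewall-cmd --reload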

[2] Add a new node to an existing cluster.
[root@node01 ~]#
pcs status

Cluster name: ha_cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node01.srv.world (version 2.1.6-10.1.el9-6fdc9deea29) - partition with quorum
  * Last updated: Tue Dec  5 14:37:00 2023 on node01.srv.world
  * Last change:  Tue Dec  5 14:33:19 2023 by root via cibadmin on node01.srv.world
  * 2 nodes configured
  * 1 resource instance configured

Node List:
  * Online: [ node01.srv.world node02.srv.world ]

Full List of Resources:
  * scsi-shooter        (stonith:fence_scsi):    Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# authorize new node

[root@node01 ~]#
pcs host auth node03.srv.world

Username: hacluster
Password:
node03.srv.world: Authorized

# add new node

[root@node01 ~]#
pcs cluster node add node03.srv.world

No addresses specified for host 'node03.srv.world', using 'node03.srv.world'
Disabling sbd...
node03.srv.world: sbd disabled
Sending 'corosync authkey', 'pacemaker authkey' to 'node03.srv.world'
node03.srv.world: successful distribution of the file 'corosync authkey'
node03.srv.world: successful distribution of the file 'pacemaker authkey'
Sending updated corosync.conf to nodes...
node01.srv.world: Succeeded
node02.srv.world: Succeeded
node03.srv.world: Succeeded
node01.srv.world: Corosync configuration reloaded
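The nodelist in [corosync.conf] on every member now includes the new node; as a quick optional check, display it on any existing node.

[root@node01 ~]#
cat /etc/corosync/corosync.conf

# the [nodelist] section should contain an entry for node03.srv.world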
[3] Update the fence device settings.
If SCSI fencing is configured as in this example, log in to the fence device's shared storage from the newly added node and install the SCSI fence agent there ([2], [3]).
Then update the fencing device configuration as follows.
# update fencing device list

[root@node01 ~]#
pcs stonith update scsi-shooter pcmk_host_list="node01.srv.world node02.srv.world node03.srv.world"

[root@node01 ~]#
pcs stonith config scsi-shooter

Resource: scsi-shooter (class=stonith type=fence_scsi)
  Attributes: scsi-shooter-instance_attributes
    devices=/dev/disk/by-id/wwn-0x6001405f89df433c1ce4390afc6e0bad
    pcmk_host_list="node01.srv.world node02.srv.world node03.srv.world"
  Meta Attributes: scsi-shooter-meta_attributes provides=unfencing
  Operations: monitor: scsi-shooter-monitor-interval-60s interval=60s
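[fence_scsi] works with SCSI-3 persistent reservations, so once the new node has joined the cluster and been unfenced, its key should be registered on the shared device. As an optional check, read the keys with [sg_persist] (provided by the [sg3_utils] package; the device path is the one from the stonith configuration above).

[root@node01 ~]#
sg_persist --in --read-keys --device=/dev/disk/by-id/wwn-0x6001405f89df433c1ce4390afc6e0bad

# one key per unfenced cluster node should be listed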
[4] If resources are already configured in the existing cluster, each of them must also be prepared on the newly added node so that the node can take them over in the event of a failover.
For example, if LVM shared storage is configured as shown here, the newly added node must be made aware of the shared LVM storage beforehand.
[root@node03 ~]#
iscsiadm -m discovery -t sendtargets -p 10.0.0.30

10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target01
10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target02
[root@node03 ~]#
iscsiadm -m node --login --target iqn.2022-01.world.srv:dlp.target02
[root@node03 ~]#
iscsiadm -m session -o show

tcp: [1] 10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target01 (non-flash)
tcp: [2] 10.0.0.30:3260,1 iqn.2022-01.world.srv:dlp.target02 (non-flash)
[root@node03 ~]#
lvmdevices --adddev /dev/sdb1

[root@node03 ~]#
lvm pvscan --cache --activate ay

  pvscan[17327] PV /dev/vda2 online, VG cs is complete.
  pvscan[17327] PV /dev/sdb1 ignore foreign VG.
  pvscan[17327] VG cs run autoactivation.
  2 logical volume(s) in volume group "cs" now active
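Before relying on failover, it is worth confirming that the new node can actually see the shared physical volume and its volume group (the device and volume group names come from your own LVM setup).

[root@node03 ~]#
pvs /dev/sdb1

[root@node03 ~]#
vgs

# the volume group used by the [LVM-activate] resource should be listed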
[5]

Likewise, each resource already configured in the existing cluster must be prepared on the newly added node so that the node can become active on failover.
For example, if Apache httpd is configured as shown here, complete the [1] section of the linked page on the newly added node.
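For reference, what has to exist locally on the new node is roughly the following (a minimal sketch, assuming the same httpd setup as on the existing nodes; the [server-status] location is what the [ocf:heartbeat:apache] agent polls by default).

[root@node03 ~]#
dnf -y install httpd

[root@node03 ~]#
vi /etc/httpd/conf.d/status.conf
# create new (let the resource agent check the local server status)
<Location /server-status>
    SetHandler server-status
    Require local
</Location>

[root@node03 ~]#
firewall-cmd --add-service=http --permanent && firewall-cmd --reload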

[6] After completing the settings for each resource, start cluster services on the newly added node.
# start cluster services

[root@node01 ~]#
pcs cluster start node03.srv.world

node03.srv.world: Starting Cluster...
[root@node01 ~]#
pcs cluster enable node03.srv.world

node03.srv.world: Cluster Enabled
[root@node01 ~]#
pcs status

Cluster name: ha_cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node02.srv.world (version 2.1.6-10.1.el9-6fdc9deea29) - partition with quorum
  * Last updated: Tue Dec  5 15:31:11 2023 on node01.srv.world
  * Last change:  Tue Dec  5 15:30:11 2023 by hacluster via crmd on node02.srv.world
  * 3 nodes configured
  * 5 resource instances configured

Node List:
  * Online: [ node01.srv.world node02.srv.world node03.srv.world ]

Full List of Resources:
  * scsi-shooter        (stonith:fence_scsi):    Started node01.srv.world
  * Resource Group: ha_group:
    * lvm_ha    (ocf:heartbeat:LVM-activate):    Started node01.srv.world
    * httpd_fs  (ocf:heartbeat:Filesystem):      Started node01.srv.world
    * httpd_vip (ocf:heartbeat:IPaddr2):         Started node01.srv.world
    * website   (ocf:heartbeat:apache):  Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
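As an optional, non-destructive check before the fencing test in [7], put the currently active node into standby, confirm that the resources move to another node, and then bring it back.

[root@node01 ~]#
pcs node standby node01.srv.world

[root@node01 ~]#
pcs status

[root@node01 ~]#
pcs node unstandby node01.srv.world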
[7] Run a fencing test and verify that the resources successfully fail over to the newly added node.
[root@node03 ~]#
pcs stonith fence node01.srv.world

Node: node01.srv.world fenced
[root@node03 ~]#
pcs status

Cluster name: ha_cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node03.srv.world (version 2.1.6-10.1.el9-6fdc9deea29) - partition with quorum
  * Last updated: Tue Dec  5 16:18:09 2023 on node01.srv.world
  * Last change:  Tue Dec  5 16:15:44 2023 by hacluster via crmd on node03.srv.world
  * 3 nodes configured
  * 5 resource instances configured

Node List:
  * Online: [ node01.srv.world node02.srv.world node03.srv.world ]

Full List of Resources:
  * scsi-shooter        (stonith:fence_scsi):    Started node03.srv.world
  * Resource Group: ha_group:
    * lvm_ha    (ocf:heartbeat:LVM-activate):    Started node03.srv.world
    * httpd_fs  (ocf:heartbeat:Filesystem):      Started node03.srv.world
    * httpd_vip (ocf:heartbeat:IPaddr2):         Started node03.srv.world
    * website   (ocf:heartbeat:apache):  Started node03.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
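Note that [fence_scsi] cuts the fenced node off from the shared device rather than powering it off. A common way to restore access on [node01] afterwards is to restart cluster services on it, which triggers unfencing (re-registration of its key); confirm the recovery procedure that fits your environment.

[root@node01 ~]#
pcs cluster stop node01.srv.world

[root@node01 ~]#
pcs cluster start node01.srv.world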
[8] To remove a node, run the following.
[root@node01 ~]#
pcs status

Cluster name: ha_cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node02.srv.world (version 2.1.6-10.1.el9-6fdc9deea29) - partition with quorum
  * Last updated: Tue Dec  5 15:31:11 2023 on node01.srv.world
  * Last change:  Tue Dec  5 15:30:11 2023 by hacluster via crmd on node02.srv.world
  * 3 nodes configured
  * 5 resource instances configured

Node List:
  * Online: [ node01.srv.world node02.srv.world node03.srv.world ]

Full List of Resources:
  * scsi-shooter        (stonith:fence_scsi):    Started node01.srv.world
  * Resource Group: ha_group:
    * lvm_ha    (ocf:heartbeat:LVM-activate):    Started node01.srv.world
    * httpd_fs  (ocf:heartbeat:Filesystem):      Started node01.srv.world
    * httpd_vip (ocf:heartbeat:IPaddr2):         Started node01.srv.world
    * website   (ocf:heartbeat:apache):  Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@node01 ~]#
pcs cluster node remove node03.srv.world

Destroying cluster on hosts: 'node03.srv.world'...
node03.srv.world: Successfully destroyed cluster
Sending updated corosync.conf to nodes...
node01.srv.world: Succeeded
node02.srv.world: Succeeded
node01.srv.world: Corosync configuration reloaded

# update fencing device list

[root@node01 ~]#
pcs stonith update scsi-shooter pcmk_host_list="node01.srv.world node02.srv.world"
[root@node01 ~]#
pcs status

Cluster name: ha_cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node02.srv.world (version 2.1.6-10.1.el9-6fdc9deea29) - partition with quorum
  * Last updated: Tue Dec  5 16:23:21 2023 on node01.srv.world
  * Last change:  Tue Dec  5 16:22:57 2023 by root via cibadmin on node01.srv.world
  * 2 nodes configured
  * 5 resource instances configured

Node List:
  * Online: [ node01.srv.world node02.srv.world ]

Full List of Resources:
  * scsi-shooter        (stonith:fence_scsi):    Started node01.srv.world
  * Resource Group: ha_group:
    * lvm_ha    (ocf:heartbeat:LVM-activate):    Started node01.srv.world
    * httpd_fs  (ocf:heartbeat:Filesystem):      Started node01.srv.world
    * httpd_vip (ocf:heartbeat:IPaddr2):         Started node01.srv.world
    * website   (ocf:heartbeat:apache):  Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
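If the removed node will no longer use the shared storage, its iSCSI session and LVM device entry can also be cleaned up (a sketch mirroring the setup in [4] above).

[root@node03 ~]#
lvmdevices --deldev /dev/sdb1

[root@node03 ~]#
iscsiadm -m node --logout --target iqn.2022-01.world.srv:dlp.target02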