Mount a cluster file system
The example mounts the GFS2 (Global File System 2) cluster file system. For more information about configuring cluster resources and cluster failure behavior, see High Availability Add-On Overview in the Red Hat Enterprise Linux documentation.
GFS2 (Global File System 2) is a clustered file system with shared data access: multiple nodes can work with the same file system simultaneously while maintaining consistency and high performance. Learn more about GFS2 in the Global File System 2 section of the Red Hat Enterprise Linux documentation.
The example uses the following components to run GFS2:
- corosync — an inter-node communication service that exchanges messages between cluster nodes, monitors their availability, and determines the quorum, that is, the minimum number of active nodes required for the cluster to operate safely;
- pacemaker — a cluster resource manager that starts, stops, and moves resources between cluster nodes if one of them fails;
- dlm — a distributed lock manager that coordinates access to shared resources within the cluster.
To mount a clustered file system:
- Connect a network disk to each server.
- Configure each node in the cluster.
- Mount the cluster file system.
1. Connect a network disk to each server
Follow the instructions in Connect a network drive to a dedicated server.
2. Configure each node in the cluster
To ensure that nodes in the cluster work together, configure each node.
1. Connect to the server via SSH or through the KVM console.
2. Open the netplan utility configuration file with the vi text editor:
   vi /etc/netplan/50-cloud-init.yaml
3. On the network interface used by the corosync service, add IP addresses from a private range. The servers must be able to reach each other at these addresses. These addresses must not overlap with the iSCSI addresses used to connect the network disks.
   <eth_name>:
     addresses:
       - <ip_address>
   Specify:
   - <eth_name> — the name of the private network interface for the primary messaging channel between cluster nodes;
   - <ip_address> — the private IP address of the current node in the main cluster network.
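   For example, a minimal sketch of the relevant fragment of 50-cloud-init.yaml, assuming a hypothetical interface name eth2 and a hypothetical private address 192.168.1.21/24 (substitute the interface and address of your own node):
   network:
     version: 2
     ethernets:
       eth2:
         addresses:
           - 192.168.1.21/24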
4. Exit the vi text editor, saving your changes:
   :wq
5. Apply the configuration:
   netplan apply
6. Make sure that the network interfaces are configured correctly:
   ip a
7. Install the components required to configure the cluster environment:
   apt install corosync pacemaker gfs2-utils pcs resource-agents ldmtool dlm-controld
8. Restart the server.
9. Open the /etc/hosts configuration file with the vi text editor:
   vi /etc/hosts
10. Add the IP addresses and host names of the cluster nodes to /etc/hosts:
   <ip_address_1> <node_name_1>
   <ip_address_2> <node_name_2>
   Specify:
   - <ip_address_1> — the primary IP address of the first node in the private network;
   - <node_name_1> — the name of the first node on the private network, for example node-1;
   - <ip_address_2> — the primary IP address of the second node in the private network;
   - <node_name_2> — the name of the second node on the private network, for example node-2.
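   For example, assuming the hypothetical addresses 192.168.1.21 and 192.168.1.22 and the node names node-1 and node-2, the added lines might look like this:
   192.168.1.21 node-1
   192.168.1.22 node-2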
11. Exit the vi text editor, saving your changes:
   :wq
3. Mount the cluster file system
On one of the cluster nodes, configure the cluster and mount the cluster file system on the network disk.
1. Connect to the server via SSH or through the KVM console.
2. Make sure that the network interfaces are configured correctly:
   ip a
3. Install the components required to configure the cluster environment:
   apt install corosync pacemaker gfs2-utils pcs resource-agents ldmtool dlm-controld
4. Restart the server.
5. Create a security key for corosync:
   corosync-keygen
   The key will be saved to the /etc/corosync/authkey file.
6. Distribute the key to the other nodes in the cluster using the scp utility by running the command for each node:
   scp /etc/corosync/authkey root@<node_name>:/etc/corosync/authkey
   Specify:
   - <node_name> — the name of the node on the private network that you specified when configuring the cluster nodes in step 10.
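   To confirm that every node received the same key, you can compare checksums on each node (a quick check, assuming the standard sha256sum utility available on Ubuntu):
   sha256sum /etc/corosync/authkey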
7. Create a cluster:
   pcs cluster setup <cluster_name> <node_name_1> <node_name_2>
   Specify:
   - <cluster_name> — the cluster name;
   - <node_name_1> — the name of the current node on the private network, which you specified when configuring the cluster nodes in step 10;
   - <node_name_2> — the name of the second node on the private network, which you specified when configuring the cluster nodes in step 10.
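   For example, with a hypothetical cluster name mycluster and the node names node-1 and node-2 from step 10:
   pcs cluster setup mycluster node-1 node-2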
8. Start the corosync and pacemaker services on all nodes in the cluster:
   pcs cluster start --all
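   If you also want the cluster services to start automatically after a server reboot, you can additionally enable them on all nodes (an optional step, not part of this example):
   pcs cluster enable --all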
9. Verify that the cluster has moved to the online status:
   pcs status
10. Make sure that the /etc/corosync/corosync.conf configuration file has the correct cluster settings:
   cat /etc/corosync/corosync.conf
   The contents of the configuration file will appear in the response. For example:
   totem {
       version: 2
       cluster_name: cluster_name
       transport: knet
       crypto_cipher: aes256
       crypto_hash: sha256
   }
   nodelist {
       node {
           ring0_addr: node-1
           name: node-1
           nodeid: 1
       }
       node {
           ring0_addr: node-2
           name: node-2
           nodeid: 2
       }
   }
   quorum {
       provider: corosync_votequorum
       two_node: 1
   }
   logging {
       to_logfile: yes
       logfile: /var/log/corosync/corosync.log
       to_syslog: yes
       timestamp: on
   }
   Here:
   - cluster_name — the cluster name you specified in step 7;
   - node-1 — the name of the current node in the cluster network;
   - node-2 — the name of the second node in the cluster network.
11. Display information about the network disks:
   multipath -ll
   The command output will display information about the devices. For example:
   mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
   size=20G features='0' hwhandler='1 alua' wp=rw
   |-+- policy='service-time 0' prio=10 status=active
   | `- 8:0:0:0 sdc 8:32 active ready running
   `-+- policy='service-time 0' prio=10 status=enabled
     `- 9:0:0:0 sdd 8:48 active ready running
   Here mpatha is the name of the network disk.
12. Format the network disk with the GFS2 file system:
   mkfs.gfs2 -p lock_dlm -t <cluster_name>:<cluster_volume_name> -j <number_of_cluster_nodes> /dev/mapper/<block_storage_name>
   Specify:
   - <cluster_name>:<cluster_volume_name> — the identifier of the GFS2 file system within the cluster; it consists of two values and must be no more than 16 characters in total, where:
     - <cluster_name> — the cluster name you specified in step 7;
     - <cluster_volume_name> — the file system name;
   - <number_of_cluster_nodes> — the number of GFS2 journals, one journal per cluster node;
   - <block_storage_name> — the name of the network disk you obtained in step 11.
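   For example, assuming the hypothetical cluster name mycluster, the file system name gfs, two cluster nodes, and the mpatha disk from step 11:
   mkfs.gfs2 -p lock_dlm -t mycluster:gfs -j 2 /dev/mapper/mpatha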
13. Run dlm, the cluster locking mechanism:
   pcs resource start dlm
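   This assumes a dlm resource already exists in the cluster. If it has not been created yet, a sketch of creating it as a clone, based on the ocf:pacemaker:controld agent shown in the crm status output later in this guide (the operation and clone option values are illustrative):
   pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s clone interleave=true ordered=true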
14. Configure the policy for how the cluster behaves when quorum is lost:
   pcs property set no-quorum-policy=freeze
   The freeze value makes nodes that lose quorum freeze activity until quorum is restored; GFS2 requires this setting because resources that use a GFS2 file system cannot be stopped safely without quorum.
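   You can check that the property has been applied (pcs property list is assumed here; newer pcs releases provide the equivalent pcs property config):
   pcs property list | grep no-quorum-policy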
15. Create a mount point:
   mkdir -p /mnt/<mount_point_name>
   Specify:
   - <mount_point_name> — the name of the directory to which the cluster file system will be mounted.
16. Create a cluster resource that mounts the network disk as a GFS2 file system on all nodes at startup:
   crm configure primitive <resource_name> ocf:heartbeat:Filesystem params device="/dev/mapper/<block_storage_name>" directory="/mnt/<mount_point_name>" fstype="gfs2"
   Specify:
   - <resource_name> — the unique name of the resource within the cluster;
   - <block_storage_name> — the name of the network disk you obtained in step 11;
   - <mount_point_name> — the name of the mount point directory you created in step 15.
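   For example, a sketch assuming the resource name ClusterFS (matching the crm status output below), the mpatha disk, and the /mnt/gfs mount point, plus a clone so the file system is mounted on every node (the clone name ClusterFS-clone is hypothetical; with a clone, crm status also lists the clone set):
   crm configure primitive ClusterFS ocf:heartbeat:Filesystem params device="/dev/mapper/mpatha" directory="/mnt/gfs" fstype="gfs2"
   crm configure clone ClusterFS-clone ClusterFS meta interleave=true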
17. Verify that the cluster is working correctly:
   crm status
   The response will display information about the state of the cluster. For example:
   Cluster Summary:
     * Stack: corosync
     * Current DC: node-1 (version 2.1.2-ada5c3b36e2) - partition with quorum
     * Last updated: Mon Feb 10 11:58:13 2025
     * Last change: Fri Feb 7 19:19:07 2025 by root via cibadmin on node-1
     * 2 nodes configured
     * 2 resource instances configured
   Node List:
     * Online: [ node-1 node-2 ]
   Full List of Resources:
     * dlm (ocf:pacemaker:controld): Started [ node-1 node-2 ]
     * ClusterFS (ocf:heartbeat:Filesystem): Started [ node-1 node-2 ]
   Here:
   - the Current DC line shows the cluster management node; the partition with quorum state means that the cluster has reached quorum and is operating correctly;
   - the Node List block lists the nodes of the cluster; the Online status means that the nodes are available and participating in the cluster;
   - the Full List of Resources block shows the status of cluster resources; the Started status means that the resources have been successfully started on the specified cluster nodes.
18. Verify that corosync has established a connection to the other nodes in the cluster:
   corosync-cfgtool -s
   The response will show information about corosync network connections. For example:
   Local node ID 2, transport knet
   LINK ID 0 udp
           addr    = 192.168.1.23
           status:
                   nodeid: 1: connected
                   nodeid: 2: localhost
   Here, the status block shows the connection status of each node in the cluster:
   - nodeid: 1: connected — the node is available and the connection is established;
   - nodeid: 2: localhost — the current node.
19. Verify that the dlm cluster resource is working correctly and that all cluster nodes are discovered:
   dlm_tool status
   The response will display information about the status of the dlm resource. For example:
   cluster nodeid 2 quorate 1 ring seq 80 80
   daemon now 234888 fence_pid 0
   node 1 M add 630 rem 212 fail 60 fence 159 at 1 1738943226
   node 2 M add 61 rem 0 fail 0 fence 0 at 0 0
   Here:
   - quorate 1 — the cluster has reached quorum;
   - node 1 and node 2 with the status M — the cluster nodes are active and participating in the cluster.
20. Make sure that the network disk appears in the system and is mounted to the correct location:
   lsblk
   The response will show information about the disks and their mount points. For example:
   NAME       MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
   sdc          8:32   0  150G  0 disk
   └─mpatha   252:0    0  150G  0 mpath /mnt/gfs
   sdd          8:48   0  150G  0 disk
   └─mpatha   252:0    0  150G  0 mpath /mnt/gfs
   Here:
   - sdc, sdd — network disks;
   - mpatha — the multipath device;
   - /mnt/gfs — the mount point of the GFS2 file system.
21. Make sure that the GFS2 file system is mounted correctly:
   mount | grep gfs
   The response will show information about the file system. For example:
   /dev/mapper/mpatha on /mnt/gfs type gfs2 (rw,relatime,rgrplvb)
   Here:
   - /dev/mapper/mpatha — the network disk that hosts the GFS2 file system;
   - /mnt/gfs — the file system mount point;
   - gfs2 — the file system type.
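   As a final check, you can verify shared access by writing a test file on one node and reading it on the other (a quick sketch assuming the /mnt/gfs mount point from the examples above; the file name is arbitrary):
   # on the first node
   echo "hello from $(hostname)" > /mnt/gfs/test.txt
   # on the second node
   cat /mnt/gfs/test.txt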