How to configure a highly available cluster file system on Linux
Introduction:
In computing, high availability (HA) refers to techniques that improve the reliability and availability of a system. In a cluster environment, a highly available file system is one of the key components that keeps services running without interruption. This article explains how to configure a highly available cluster file system on Linux and gives the corresponding commands and configuration examples.
On Ubuntu, you can install the required packages with the following command (pcs is included here because it is used later to manage the cluster resources; on newer releases the DRBD tools are packaged as drbd-utils rather than drbd8-utils):
sudo apt-get install pacemaker corosync pcs drbd8-utils gfs2-utils
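After installation, it is worth confirming on both nodes that the tools are available, for example:
corosync -v
pacemakerd --version
drbdadm --version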
Next, edit the hosts file on both nodes so that each node can reach the other by name:
sudo nano /etc/hosts
Add the following content:
192.168.1.100 node1
192.168.1.101 node2
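A quick way to confirm that the names resolve is to ping each node by name:
ping -c 1 node1
ping -c 1 node2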
Create the Corosync configuration file.
sudo nano /etc/corosync/corosync.conf
Add the following:
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_syslog: yes
    to_logfile: yes
    logfile: /var/log/corosync.log
    debug: off
    timestamp: on
}
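The same configuration file must be present on both nodes. Assuming root SSH access between the nodes, it can be copied from node1 like this:
scp /etc/corosync/corosync.conf root@node2:/etc/corosync/corosync.conf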
Enable the services so that they start automatically at boot:
sudo systemctl enable corosync
sudo systemctl enable pacemaker
Start the services:
sudo systemctl start corosync
sudo systemctl start pacemaker
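Once both services are running on both nodes, cluster membership can be checked with:
sudo corosync-cfgtool -s
sudo pcs status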
Create the DRBD resource configuration file.
sudo nano /etc/drbd.d/myresource.res
Add the following:
resource myresource {
    protocol C;

    on node1 {
        device /dev/drbd0;
        disk /dev/sdb;
        address 192.168.1.100:7789;
        meta-disk internal;
    }

    on node2 {
        device /dev/drbd0;
        disk /dev/sdb;
        address 192.168.1.101:7789;
        meta-disk internal;
    }

    net {
        allow-two-primaries;
    }

    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
    }

    syncer {
        rate 100M;
        al-extents 257;
    }
}
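The resource file must be identical on node2; assuming root SSH access, it can be copied over before the metadata is created, and the next command then has to be run on both nodes:
scp /etc/drbd.d/myresource.res root@node2:/etc/drbd.d/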
sudo drbdadm create-md myresource
Start the DRBD service on both nodes.
sudo systemctl start drbd
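Before a file system can be created, the DRBD resource has to be brought up and promoted. On a brand-new resource the initial synchronization must be forced from one side; with the DRBD 8 tools used here, a rough sketch is (run the forced promotion on node1 only, the other commands on both nodes):
sudo drbdadm up myresource
sudo drbdadm -- --overwrite-data-of-peer primary myresource   # node1 only, first time
cat /proc/drbd   # wait until the synchronization has finished
Because allow-two-primaries is set, node2 can afterwards also be promoted with sudo drbdadm primary myresource so that both nodes can mount the file system.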
Once both nodes are Primary and the synchronization has finished, create a GFS2 file system on the DRBD device (GFS2 uses the lock_dlm lock manager in a cluster, and -j 2 creates one journal per node):
sudo mkfs.gfs2 -p lock_dlm -t mycluster:myresource -j 2 /dev/drbd0
Create a mount point and mount the file system:
sudo mkdir /mnt/mycluster
sudo mount -t gfs2 /dev/drbd0 /mnt/mycluster
Add the file system as a Pacemaker resource so that the cluster manages the mount:
sudo pcs resource create myresource Filesystem device="/dev/drbd0" directory="/mnt/mycluster" fstype="gfs2" op start timeout="60s" op stop timeout="60s" op monitor interval="10s" timeout="20s" start-delay="5s"
Add ordering and colocation constraints for the resource:
sudo pcs constraint order myresource-clone then start myresource
sudo pcs constraint colocation add myresource with myresource-clone
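After the resource and constraints are in place, the overall state of the cluster and its resources can be checked with:
sudo pcs status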
To test failover, stop the cluster on node1:
sudo pcs cluster stop node1
Then check the mount on the remaining node:
sudo mount | grep "/mnt/mycluster"
The output should show /dev/drbd0 mounted on /mnt/mycluster on the standby node.
Bring node1 back into the cluster:
sudo pcs cluster start node1
Check the mount again:
sudo mount | grep "/mnt/mycluster"
The output should show the file system mounted on the original node once the cluster has moved the resource back.
Conclusion:
Configuring a highly available cluster file system improves the reliability and availability of a system. This article has shown how to configure such a file system on Linux with Corosync, Pacemaker, DRBD and GFS2, together with the corresponding commands and configuration examples. Readers can adjust the configuration to their own requirements to achieve higher availability.