In the previous two guides, we discussed what clustering is, how to create a cluster and add nodes to it, and what cluster.conf looks like once the necessary configuration is done.
Today, in this third part of the clustering series, we are going to discuss what fencing and failover are and how to configure them in our setup.
First of all, let's see what is meant by fencing and failover.
What is Fencing?
In a setup with more than one node, it is possible that one or more nodes will fail at some point. Fencing is the act of isolating a malfunctioning server from the cluster in order to protect the synced resources. We therefore add a fence device to protect the resources shared within the cluster.
What is Failover?
Imagine a scenario where a server holds data so important to an organization that its stakeholders need the server up and running without any downtime. In this case we can duplicate the data to another server (so there are two servers with identical data and specs) and use the second one as the failover.
If one of the servers goes down, the server we have configured as the failover takes over the load and provides the services the first server was offering. With this method, users do not experience the downtime caused by the failure of the primary server.
You can go through the Part 01 and Part 02 of this clustering series here:
- What is Clustering and Advantages/Disadvantages – Part 1
- Setup Cluster with Two Nodes in Linux – Part 2
As we've already discussed in the last two articles, our test environment uses three servers: the first acts as the cluster (management) server and the other two as nodes.
Cluster Server: 172.16.1.250  Hostname: clserver.test.net
node01: 172.16.1.222  Hostname: nd01server.test.net
node02: 172.16.1.223  Hostname: nd02server.test.net
Step 1: How to Add Fencing to Cluster Server
1. First we have to enable fencing on the cluster server; for this I will use the following two commands.
# ccs -h 172.16.1.250 --setfencedaemon post_fail_delay=0
# ccs -h 172.16.1.250 --setfencedaemon post_join_delay=10
As you can see, we use the ccs command to add the configurations to the cluster. Following are definitions of the options I have used in the commands.
- -h: Cluster host IP address.
- --setfencedaemon: Applies the changes to the fencing daemon.
- post_fail_delay: Time in seconds the daemon waits before fencing a victim node after that node has failed.
- post_join_delay: Time in seconds the daemon waits before fencing a victim node after a node has joined the cluster.
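After the two --setfencedaemon commands above, the fence daemon settings should appear in cluster.conf roughly as follows (a sketch of the relevant fragment only; attribute order may differ on your system, and the surrounding &lt;cluster&gt; element and node entries are unchanged):

```xml
<!-- Fragment of /etc/cluster/cluster.conf after setting the fence daemon delays -->
<fence_daemon post_fail_delay="0" post_join_delay="10"/>
```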
2. Now let's add a fence device for our cluster; execute the command below to add it.
# ccs -h 172.16.1.250 --addfencedev tecmintfence agent=fence_virt
This is how I executed the command, and how the cluster.conf file looks after adding a fence device.
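As a sketch, the --addfencedev command should add a section like the following to cluster.conf (attribute order may vary):

```xml
<!-- Fence device section created by: ccs -h 172.16.1.250 --addfencedev tecmintfence agent=fence_virt -->
<fencedevices>
	<fencedevice agent="fence_virt" name="tecmintfence"/>
</fencedevices>
```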
You can execute the command below to see which fence agents are available for creating a fence device. I used fence_virt since I use VMs for my setup.
# ccs -h 172.16.1.250 --lsfenceopts
Step 2: Add Two Nodes to Fence Device
3. Now I'm going to create a fence method and add it to each of the hosts.
# ccs -h 172.16.1.250 --addmethod Method01 172.16.1.222
# ccs -h 172.16.1.250 --addmethod Method01 172.16.1.223
You have to add the method you created a moment ago to both nodes in your setup. Following is how I added the methods, and my cluster.conf.
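At this point the clusternode entries in cluster.conf should each carry the new fence method. A sketch of the relevant section (the nodeid values are assumed from the Part 2 setup; yours may differ):

```xml
<!-- Each node now has an empty fence method named Method01 -->
<clusternodes>
	<clusternode name="172.16.1.222" nodeid="1">
		<fence>
			<method name="Method01"/>
		</fence>
	</clusternode>
	<clusternode name="172.16.1.223" nodeid="2">
		<fence>
			<method name="Method01"/>
		</fence>
	</clusternode>
</clusternodes>
```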
4. As the next step, you will have to associate the fence device we created, namely “tecmintfence”, with the fence methods you created for both nodes.
# ccs -h 172.16.1.250 --addfenceinst tecmintfence 172.16.1.222 Method01
# ccs -h 172.16.1.250 --addfenceinst tecmintfence 172.16.1.223 Method01
I have successfully associated my methods with the fence device, and this is how my cluster.conf looks now.
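After the --addfenceinst commands, each node's fence method should reference the fence device. A sketch of one node's entry at this stage (the nodeid is assumed from Part 2):

```xml
<!-- The device element inside the method ties the node to the tecmintfence device -->
<clusternode name="172.16.1.222" nodeid="1">
	<fence>
		<method name="Method01">
			<device name="tecmintfence"/>
		</method>
	</fence>
</clusternode>
```

The second node's entry is identical apart from its name and nodeid.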
Now you have successfully configured the fence device and methods and added your nodes to them. As the last step of Part 3, I will show you how to add a failover to the setup.
Step 3: Add Failover to Cluster Server
5. I use the following command syntax to create the failover domain for my cluster setup.
# ccs -h 172.16.1.250 --addfailoverdomain tecmintfod ordered
6. Now that you have created the failover domain, you can add the two nodes to it.
# ccs -h 172.16.1.250 --addfailoverdomainnode tecmintfod 172.16.1.222 1
# ccs -h 172.16.1.250 --addfailoverdomainnode tecmintfod 172.16.1.223 2
As shown above, cluster.conf bears all the configurations I have added for the failover domain.
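As a sketch, the failover domain section of cluster.conf should now look roughly like this (the exact set of default attributes, such as nofailback, may vary by ccs version; the trailing 1 and 2 in the commands above become the node priorities):

```xml
<!-- Ordered failover domain: on failure, the node with the lowest priority value takes over first -->
<rm>
	<failoverdomains>
		<failoverdomain name="tecmintfod" ordered="1" restricted="0">
			<failoverdomainnode name="172.16.1.222" priority="1"/>
			<failoverdomainnode name="172.16.1.223" priority="2"/>
		</failoverdomain>
	</failoverdomains>
</rm>
```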
Hope you have enjoyed Part 3 of this series. The last part of the clustering guide series will be posted soon; it will teach you how to add resources to the cluster, sync them, and start up the cluster. Stay tuned to Tecmint for more handy how-tos.
37 thoughts on “Fencing and Adding a Failover to Clustering – Part 3”
This is the first time I am using a Linux cluster, so I need your help.
I had set up VCS and Sun Cluster before; I have a doubt here about how to set up a quorum disk in CentOS 7. I have a requirement to set this up on a prod box, which is a physical server; it will be a 2-node cluster with a few shared LUNs for the data.
I am stuck on the quorum. I cannot see anyone mentioning a shared disk for quorum (normally around 500MB) in the setup. Hope you understand my requirement and my understanding.
Is Part 4 already available, sir?
Here is the link for Part 4.
Do you have Part 4 yet?
Do you have notes for fencing on CentOS 7 for a 2-node cluster?
It's pretty much the same with Red Hat. You can use the same guide.
Currently I don't, but I will try to prepare one for you through Tecmint.
The CMAN service started fine on Node1 and Node2, but starting cman on the master server fails,
saying the node name cannot be found, although the tags for both nodes are present.
Has anyone come across such an error?
Can you post the error? Did you check that CMAN can resolve the node IPs?
Below is the error:
[[email protected] cluster]# service cman start
Checking if cluster has been disabled at boot… [ OK ]
Checking Network Manager… [ OK ]
Global setup… [ OK ]
Loading kernel modules… [ OK ]
Mounting configfs… [ OK ]
Starting cman… Cannot find node name in cluster.conf
Unable to get the configuration
Cannot find node name in cluster.conf
cman_tool: corosync daemon didn’t start Check cluster logs for details
Leaving fence domain… [ OK ]
Stopping gfs_controld… [ OK ]
Stopping dlm_controld… [ OK ]
Stopping fenced… [ OK ]
Stopping cman… [ OK ]
Unloading kernel modules… [ OK ]
Unmounting configfs… [ OK ]
10.91.18.145 master.uic.com – Management node
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.91.18.143 node1.uic.com node1.uic.com
10.91.18.144 node2.uic.com node2.uic.com
10.91.18.145 master.uic.com master.uic.com
All the nodes (management node i.e. master.uic.com, node1, node2) can ping each other.
My cluster.conf(10.91.18.145) is: