Fencing and Adding a Failover to Clustering – Part 3

In the previous two guides, we discussed how to install the cluster packages, create a cluster, and add nodes to it; we also examined what cluster.conf looks like after the necessary configuration is done.

Today, in this third part of the clustering series, we will discuss what fencing and failover are, and how to configure them in our setup.

Fencing and Add Failover to Cluster – Part 3

First of all, let’s see what is meant by fencing and failover.

What is Fencing?

In a setup with more than one node, it is possible that one or more nodes will fail at some point. Fencing is the act of isolating the malfunctioning server from the cluster in order to protect and secure the synced resources. Therefore we add a fence to protect the resources shared within the cluster.

What is Failover?

Imagine a scenario where a server holds important data for an organization, and the stakeholders need that server to stay up and running without any downtime. In this case we can duplicate the data to another server (so there are two servers with identical data and specs) and use it as the failover.

If one of the servers goes down, the server we have configured as the failover takes over the load and provides the services that the first server was delivering. This way, users do not experience the downtime caused by the failure of the primary server.

You can go through Part 01 and Part 02 of this clustering series here:

  1. What is Clustering and Advantages/Disadvantages – Part 1
  2. Setup Cluster with Two Nodes in Linux – Part 2

As already discussed in the last two articles, we are using three servers for this setup: the first server acts as the cluster server and the other two as nodes.

Cluster Server:
Hostname: clserver.test.net

Node 01:
Hostname: nd01server.test.net

Node 02:
Hostname: nd02server.test.net

Step 1: How to Add Fencing to Cluster Server

1. First, we have to enable fencing on the cluster server; for this I will use the two commands below.

# ccs -h clserver.test.net --setfencedaemon post_fail_delay=0
# ccs -h clserver.test.net --setfencedaemon post_join_delay=10
Enable Fencing on Cluster

As you can see, we use the ccs command to add the configuration to the cluster. Below are definitions of the options used in the commands.

  1. -h: Cluster host name or IP address.
  2. --setfencedaemon: Applies the changes to the fencing daemon.
  3. post_fail_delay: Time in seconds the daemon waits before fencing a victim server after a node has failed.
  4. post_join_delay: Time in seconds the daemon waits before fencing a victim server after a node has joined the cluster.
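After these two commands, cluster.conf should contain a fence daemon entry along the following lines (a sketch; the exact attribute order in your file may differ):

```xml
<fence_daemon post_fail_delay="0" post_join_delay="10"/>
```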

2. Now let’s add a fence device to our cluster; execute the command below to add a fence device.

# ccs -h clserver.test.net --addfencedev tecmintfence agent=fence_virt

This is how I executed the command and how the cluster.conf file looks after adding a fence device.

Add Fencing Device in Cluster
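If the command succeeds, cluster.conf gains a fencedevices section, roughly like this sketch (based on the device name and agent used above):

```xml
<fencedevices>
        <fencedevice agent="fence_virt" name="tecmintfence"/>
</fencedevices>
```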

You can execute the command below to see what kinds of fence agents you can use to create a fence device. I used fence_virt since I use VMs for my setup.

# ccs -h clserver.test.net --lsfenceopts
Fence Options

Step 2: Add Two Nodes to Fence Device

3. Now I’m going to add a fence method for each node, which we will later associate with the fence device we created.

# ccs -h clserver.test.net --addmethod Method01 nd01server.test.net
# ccs -h clserver.test.net --addmethod Method01 nd02server.test.net

You have to add a method for both nodes in your setup. Following is how I added the methods and how my cluster.conf looks.

Add Nodes to Fence Device
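Each --addmethod call adds an (empty, for now) fence method block under the corresponding clusternode entry. The result should look roughly like this sketch (the nodeid values assume the order in which the nodes were added in Part 2):

```xml
<clusternode name="nd01server.test.net" nodeid="1">
        <fence>
                <method name="Method01"/>
        </fence>
</clusternode>
<clusternode name="nd02server.test.net" nodeid="2">
        <fence>
                <method name="Method01"/>
        </fence>
</clusternode>
```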

4. As the next step, you have to associate the fence methods you created for both nodes with the fence device we created, namely “tecmintfence”.

# ccs -h clserver.test.net --addfenceinst tecmintfence nd01server.test.net Method01
# ccs -h clserver.test.net --addfenceinst tecmintfence nd02server.test.net Method01

I have successfully associated my methods with the fence device, and this is how my cluster.conf looks now.

Add Fence to Nodes
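After --addfenceinst, each method block gains a device reference pointing at the fence device. A sketch of one node’s entry:

```xml
<clusternode name="nd01server.test.net" nodeid="1">
        <fence>
                <method name="Method01">
                        <device name="tecmintfence"/>
                </method>
        </fence>
</clusternode>
```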

Now you have successfully configured the fence device and methods and added your nodes to them. As the last step of Part 03, I will show you how to add a failover domain to the setup.

Step 3: Add Failover to Cluster Server

5. I use the command below to create a failover domain in the cluster setup.

# ccs -h clserver.test.net --addfailoverdomain tecmintfod ordered
Add Failover to Cluster
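The ordered keyword means node priorities decide which member takes over first (ccs also accepts restricted and nofailback here). In cluster.conf, this shows up roughly as the following sketch (the 0/1 flag values are the defaults I would expect; check your own file):

```xml
<failoverdomains>
        <failoverdomain name="tecmintfod" nofailback="0" ordered="1" restricted="0"/>
</failoverdomains>
```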

6. Now that you have created the failover domain, you can add the two nodes to it.

# ccs -h clserver.test.net --addfailoverdomainnode tecmintfod nd01server.test.net 1
# ccs -h clserver.test.net --addfailoverdomainnode tecmintfod nd02server.test.net 2
Add Nodes to Cluster Failover

As shown above, cluster.conf now contains all the configuration I have added for the failover domain.
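To sanity-check the failover configuration without reading the whole file, a quick grep over cluster.conf lists each node with its priority. The sketch below uses a here-document as a stand-in for /etc/cluster/cluster.conf, with the node names and priorities from this setup:

```shell
#!/bin/sh
# Stand-in for /etc/cluster/cluster.conf (assumption: this fragment
# mirrors the failover domain configured in the steps above).
cat > /tmp/cluster-sample.conf <<'EOF'
<failoverdomains>
        <failoverdomain name="tecmintfod" ordered="1">
                <failoverdomainnode name="nd01server.test.net" priority="1"/>
                <failoverdomainnode name="nd02server.test.net" priority="2"/>
        </failoverdomain>
</failoverdomains>
EOF
# List each failover node with its priority (lower number = preferred).
grep -o 'failoverdomainnode name="[^"]*" priority="[0-9]*"' /tmp/cluster-sample.conf
```

On a live cluster server you would point the grep at /etc/cluster/cluster.conf instead of the sample file.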

Hope you have enjoyed Part 3 of this series. The last part of the clustering guide series will be posted soon; it will teach you how to add resources to the cluster, sync them, and start up the cluster. Keep in touch with Tecmint for the handy HowTos.

