How to Sync Cluster Configuration and Verify Failover Setup in Nodes – Part 4

Hello folks. First of all, my apologies for the delay in publishing the last part of this cluster series. Let’s get to work without any further delay.

Since many of you have completed all three previous parts, let me brief you on what we have covered so far. By now we know how to install and configure the cluster packages for two nodes and how to enable fencing and failover in a clustered environment.

Sync Cluster Configuration and Verify FailOver – Part 4

If you don’t remember the earlier steps, you can refer to my previous parts, since it took a little longer to post this last one.

Introduction to Linux Clustering and Advantages/Disadvantages of Clustering – Part 1

How to Install and Configure Cluster with Two Nodes in Linux – Part 2

Fencing and Adding a Failover to Clustering – Part 3

We will start by adding resources to the cluster. In this case, we can add a file system or a web service, as needed. Here I have the /dev/sda3 partition mounted at /x01, which I want to add as a file system resource.

1. Use the command below to add a file system as a resource:

# ccs -h --addresource fs name=my_fs device=/dev/mapper/tecminttest_lv_vol01 mountpoint=/x01 fstype=ext3
Add Filesystem to Cluster

Additionally, if you want to add a service as well, you can do so by issuing the following command.

# ccs -h --addservice my_web domain=testdomain recovery=relocate autostart=1

You can verify it by viewing the cluster.conf file as we did in previous lessons.
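For a quicker check than reading the whole file, you can grep out just the resource and service definitions. A minimal sketch, run here against a small sample fragment rather than your live /etc/cluster/cluster.conf:

```shell
# Write a small sample fragment of cluster.conf (illustrative content only);
# on a live node you would grep /etc/cluster/cluster.conf directly.
cat <<'EOF' > sample_cluster.conf
<fs device="/dev/mapper/tecminttest_lv_vol01" fstype="ext3" mountpoint="/x01" name="my_fs"/>
<service autostart="1" domain="testdomain" name="my_web" recovery="relocate"/>
EOF

# Show only the file system resource and service lines.
grep -E '<(fs|service) ' sample_cluster.conf
```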

2. Now enter the following entry in the cluster.conf file to add a reference tag for the resource to the service.

<fs ref="my_fs"/>
Add Service to Cluster

3. All set. Now we will see how to sync the configuration we made between the two nodes in the cluster. The following command will do the needful.

# ccs -h --sync --activate
Sync Cluster Configuration

Note: Enter the passwords we set for ricci back in the early stages, when we were installing the packages.

You can verify your configuration by using the command below.

# ccs -h --checkconf
Verify Cluster Configuration
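Several readers have hit an “unable to connect, make sure the ricci server is started” error at this stage. Before syncing, it can help to confirm that ricci is reachable on every node; ricci listens on TCP port 11111. A minimal sketch, where the hostnames node01 and node02 are placeholders for your own nodes:

```shell
# check_ricci: probe TCP port 11111 (the port ricci listens on) on each
# given host before attempting "ccs --sync --activate".
check_ricci() {
    for host in "$@"; do
        if nc -z -w 3 "$host" 11111 2>/dev/null; then
            echo "$host: ricci reachable"
        else
            echo "$host: ricci NOT reachable - run 'service ricci start' there"
        fi
    done
}

# Hostnames below are placeholders; substitute your cluster nodes.
check_ricci node01 node02
```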

4. Now it’s time to start things up. You can use one of the commands below, as you prefer.

To start only one node, run the command against the relevant node’s IP.

# ccs -h start

Or, if you want to start all the nodes, use the --startall option as follows.

# ccs -h --startall

You can use stop or --stopall if you need to stop the cluster.

Resources are enabled automatically when the cluster starts. In some scenarios, though, you may want to start the cluster without enabling the resources, for example when you have intentionally disabled the resources on a particular node to break a fencing loop and do not want them enabled at startup.

For that purpose you can use the command below, which starts the cluster but does not enable the resources.

# ccs -h --startall --noenable 

5. After the cluster has started up, you can view the status by issuing the clustat command.

# clustat
Check Cluster Status

The above output shows that there are two nodes in the cluster and that both are up and running at the moment.
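If you want to script against this (for monitoring, say), a little awk over the clustat output can count the online members. A minimal sketch, using a canned sample of typical clustat member lines in place of a live cluster:

```shell
# clustat_sample mimics the member table printed by clustat; on a live
# cluster you would pipe the real "clustat" output into awk instead.
clustat_sample() {
cat <<'EOF'
 Member Name        ID   Status
 ------ ----        ---- ------
 node01             1    Online, Local
 node02             2    Online
EOF
}

# Count the members whose status line reports "Online".
clustat_sample | awk '/Online/ { n++ } END { print n " node(s) online" }'
# → 2 node(s) online
```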

6. As you may remember, we added a failover mechanism in our previous lessons. Want to check that it works? This is how you do it: force-shutdown one node and look at the cluster status with the clustat command to see the results of the failover.

I shut down my node02 server using the shutdown -h now command, then executed the clustat command from my cluster server.

Check Cluster FailOver

The above output makes it clear that node 1 is online while node 2 has gone offline after we shut it down. Yet the service and the file system we shared are still online, as you can see if you check them on node01, which is up.

# df -h /x01
Verify Cluster Node
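This check is easy to script as well. A minimal sketch with a small helper that looks a mount point up in the mount table (the helper name is mine, not a standard tool):

```shell
# is_mounted MOUNTPOINT [MOUNTS_FILE]
# Succeeds if MOUNTPOINT appears in the mount table; defaults to /proc/mounts.
is_mounted() {
    grep -q " $1 " "${2:-/proc/mounts}"
}

# On the surviving node, /x01 (our clustered file system) should still be there:
is_mounted /x01 && echo "/x01 still mounted" || echo "/x01 not mounted here"
```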

For reference, here is the whole cluster.conf file, with the full set of configuration relevant to the setup we used for tecmint.

<?xml version="1.0"?>
<cluster config_version="15" name="tecmint_cluster">
        <fence_daemon post_join_delay="10"/>
        <clusternodes>
                <clusternode name="" nodeid="1">
                        <fence>
                                <method name="Method01">
                                        <device name="tecmintfence"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="" nodeid="2">
                        <fence>
                                <method name="Method01">
                                        <device name="tecmintfence"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_virt" name="tecmintfence"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="tecmintfod" nofailback="0" ordered="1" restricted="0">
                                <failoverdomainnode name="" priority="1"/>
                                <failoverdomainnode name="" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <fs device="/dev/mapper/tecminttest_lv_vol01" fstype="ext3" mountpoint="/x01" name="my_fs"/>
                </resources>
                <service autostart="1" domain="testdomain" name="my_web" recovery="relocate">
                        <fs ref="my_fs"/>
                </service>
        </rm>
</cluster>
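Before syncing a hand-edited cluster.conf, it is worth checking that the XML is well-formed; xmllint (from libxml2) can do this, and RHEL cluster nodes also ship ccs_config_validate, which additionally checks the file against the cluster schema. A minimal sketch against a tiny sample file rather than the live config:

```shell
# Write a tiny well-formed sample; on a live node you would point xmllint
# at /etc/cluster/cluster.conf instead.
cat <<'EOF' > sample.conf
<?xml version="1.0"?>
<cluster config_version="15" name="tecmint_cluster">
        <fence_daemon post_join_delay="10"/>
</cluster>
EOF

# Exits zero (and prints nothing) if the XML parses cleanly.
xmllint --noout sample.conf && echo "sample.conf is well-formed XML"
```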

I hope you enjoyed the whole series of clustering lessons. Keep in touch with tecmint for more handy guides every day, and feel free to comment with your ideas and queries.


55 thoughts on “How to Sync Cluster Configuration and Verify Failover Setup in Nodes – Part 4”

  1. There are many things that don’t make sense.

    The LVM that’s added, is that a shared storage between all nodes or is it local to cluster server?

    If that is a local storage, what happens when the cluster server goes down. The data that’s on LVM which local to cluster server gets replicated across all nodes. If so to which location on member nodes?

    • There must be some packages that coordinate with LVM with cluster services. I am sure we can’t straight up add an LVM to cluster resources.
    • On top of all these.. The other nodes in the cluster just have ricci started and wait for the configuration come from cluster server to join and sync.
    • No offense but the article is half-baked. Tecmint should retire this page. Misleading and ending up half-way is what we don’t expect from tech articles.
  2. Dear Brother,

    Interesting lesson, I got a validation error in “ccs -h --addresource fs name=my_fs device=/dev/mapper/tecminttest_lv_vol01 mountpoint=/x01 fstype=ext3” this part below commend I have tried “ccs -h --addresource fs name=my_fs device=/dev/sda4 mountpoint=/test fstype=ext4” kindly need a advice the session.


  3. Hello interesting documentation friend I’m lost in this part.

    ccs -h --addresource fs name=my_fs device=/dev/mapper/tecminttest_lv_vol01 mountpoint=/x01 fstype=ext3

    my question is the following one, is to know if you already have created this file system /dev/mapper/tecminttest_lv_vol01 and create it and then mount it.

  4. Following the step 3 and ran “ccs -h --sync --activate” but showing the error with “unable to connect to, make sure the ricci server is started”.

    Then, I ran “service ricci status” for all three servers that showing “ricci (pid 1863) is running“….no idea what to do next. Please advice.

  5. HI Thank you for your sharing about cluster.

    I just got error that when I type

    # ccs -h --sync --activate

    then make sure the ricci server is started

    please help I am just newbie

  6. Hello,

    I have completed configuration till part 3.

    Non cluster:

    2 nodes:,

    While firing this command.

    # ccs -h --sync --activate

    It gives error: Unable to connect to, make sure the ricci server is started.

    However, ricci is running on all 3 servers.

    Please help me out asap.

  7. Using 3 virtual machines setup as mentioned and following the entire configuration, everything went smooth and my nodes also got successfully synced except when i tried

    # ccs -h --checkconf

    and it returned

    Node: does not match
    Node: does not match


    Cluster Node :
    Node 1:
    Node 2:
  8. If remove the tag in cluster.conf file I can able to start cluster and the nodes are online but the shared drive is not showing in both of the nodes.

    If I add a tag to the cluster.conf file I am getting the error message as “Validation Failure, unable to modify configuration file (use -i to ignore this error)

