How to Sync Cluster Configuration and Verify Failover Setup in Nodes – Part 4

Thilina Uvindasiri

I'm a BSc Special (Hons) graduate in Information Technology and work as a Systems Engineer. I love to work with, explore, and research Linux, and I play rugby as a hobby.

55 Responses

  1. Arun Ghanta says:

    There are many things here that don’t make sense.

    The LVM volume that’s added, is that shared storage between all the nodes, or is it local to the cluster server?

    If it is local storage, what happens when the cluster server goes down? Does the data on the LVM volume that is local to the cluster server get replicated across all the nodes? If so, to which location on the member nodes?

    • There must be some packages that coordinate LVM with the cluster services; I am sure we can’t straight up add an LVM volume to the cluster resources (see the clustered LVM sketch after the comments).
    • On top of all this, the other nodes in the cluster just have ricci started and wait for the configuration to come from the cluster server in order to join and sync.
    • No offense, but the article is half-baked. TecMint should retire this page. Misleading readers and stopping half-way is not what we expect from tech articles.
  2. Parthiban says:

    Dear Brother,

    Interesting lesson. I got a validation error at the “ccs -h 172.16.1.250 --addresource fs name=my_fs device=/dev/mapper/tecminttest_lv_vol01 mountpoint=/x01 fstype=ext3” step. For this part I tried the command “ccs -h 192.168.1.87 --addresource fs name=my_fs device=/dev/sda4 mountpoint=/test fstype=ext4”. Kindly advise on this step.

    Thanks
    Parthiban

  3. pila says:

    Hello, interesting documentation, friend. I’m lost at this part:

    ccs -h 172.16.1.250 --addresource fs name=my_fs device=/dev/mapper/tecminttest_lv_vol01 mountpoint=/x01 fstype=ext3

    My question is the following: does the file system on /dev/mapper/tecminttest_lv_vol01 have to be created already, or do you create it and then mount it? (See the filesystem sketch after the comments.)

  4. Nike Leung says:

    Following step 3, I ran “ccs -h 172.16.1.250 --sync --activate” but it shows the error “unable to connect to 172.16.1.222, make sure the ricci server is started”.

    Then I ran “service ricci status” on all three servers and each shows “ricci (pid 1863) is running”… no idea what to do next. Please advise (see the ricci troubleshooting sketch after the comments).

  5. shin says:

    Hi, thank you for sharing this cluster series.

    I just got an error when I typed

    # ccs -h 192.168.0.10 --sync --activate

    It then says “make sure the ricci server is started”.

    Please help, I am just a newbie.

  6. Vaibhav says:

    Hello,

    I have completed the configuration up to Part 3.

    Non cluster: 192.168.5.2

    2 nodes: 192.168.5.3, 192.168.5.7

    When I run this command:

    # ccs -h 192.168.5.2 --sync --activate
    

    it gives the error: Unable to connect to 192.168.5.7, make sure the ricci server is started.

    However, ricci is running on all 3 servers.

    Please help me out asap.

  7. Rehab says:

    Using the 3-virtual-machine setup as described and following the entire configuration, everything went smoothly and my nodes also synced successfully, except when I tried

    # ccs -h 172.16.209.129 --checkconf
    

    and it returned

    Node: 172.16.209.128 does not match
    Node: 172.16.209.130 does not match
    

    SETUP DETAILS:

    Cluster Node: 172.16.209.129
    Node 1: 172.16.209.128
    Node 2: 172.16.209.130
    
  8. mugundan says:

    If I remove the tag in the cluster.conf file, I am able to start the cluster and the nodes come online, but the shared drive is not showing on either of the nodes.

    If I add the tag back to the cluster.conf file, I get the error message “Validation Failure, unable to modify configuration file (use -i to ignore this error)”.
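
The first comment asks how the LVM volume can be shared between the nodes. The article itself does not cover clustered LVM, but on RHEL/CentOS 6 a volume group that must be visible on every node is normally managed through the lvm2-cluster package (the clvmd daemon) rather than added as plain local storage. The following is only a hypothetical sketch of that setup, not a step from this series; the package usage is standard, while the VG, LV, and device names are assumptions.

    # install and enable clustered LVM on every node (assumes cman is already configured)
    yum install -y lvm2-cluster
    lvmconf --enable-cluster          # switches locking_type to 3 in /etc/lvm/lvm.conf
    service clvmd start
    chkconfig clvmd on

    # create a clustered volume group on the shared disk (names here are hypothetical)
    vgcreate -cy shared_vg /dev/sdb
    lvcreate -L 1G -n shared_lv shared_vg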
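Comments 2 and 3 both concern the --addresource fs step. As far as the fs resource agent is concerned, it only mounts an existing filesystem for rgmanager; it does not create the logical volume or the filesystem. A minimal sketch, reusing the device path and mount point from the article (the surrounding steps are illustrative, not taken from the series):

    # confirm the logical volume from the earlier parts of the series exists
    lvs

    # create the filesystem on it (this destroys any data already on the volume)
    mkfs.ext3 /dev/mapper/tecminttest_lv_vol01

    # test the mount point manually once, then unmount
    mkdir -p /x01
    mount /dev/mapper/tecminttest_lv_vol01 /x01
    umount /x01

    # only then register it as a cluster resource
    ccs -h 172.16.1.250 --addresource fs name=my_fs device=/dev/mapper/tecminttest_lv_vol01 mountpoint=/x01 fstype=ext3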
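Comments 4, 5 and 6 all report “unable to connect to <node>, make sure the ricci server is started” even though ricci is running. On RHEL/CentOS 6, ccs authenticates to ricci with the ricci user’s password and connects over TCP port 11111, so an unset ricci password or a blocked port can produce exactly this message. A hedged troubleshooting sketch, using the IPs from comment 6 purely as examples:

    # on every cluster node: set a password for the ricci system user (ccs prompts for it)
    passwd ricci
    service ricci restart
    chkconfig ricci on

    # from the management host: check that ricci's port is reachable on each node
    telnet 192.168.5.7 11111

    # if it is not, open TCP 11111 in the node's firewall
    iptables -I INPUT -p tcp --dport 11111 -j ACCEPT
    service iptables save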
