How to Install and Configure Apache Hadoop on a Single Node in CentOS 7

Matei Cezar

I'm a computer addict and a fan of open source and Linux-based system software, with about 4 years of experience with Linux distributions on desktops and servers, and with bash scripting.

12 Responses

  1. Cleveland says:

    I have followed the instructions and am getting the following error:

    start-dfs.sh
    Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
    It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.
    16/11/08 18:59:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    Starting namenodes on [master.hadoop.lan]
    master.hadoop.lan: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-master.out
    localhost: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.4’: No such file or directory
    master.hadoop.lan: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.3’: No such file or directory
    master.hadoop.lan: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.2’: No such file or directory
    master.hadoop.lan: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.1’: No such file or directory
    localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-master.out
    master.hadoop.lan: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out’: No such file or directory
    master.hadoop.lan: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-master.out
    localhost: ulimit -a for user hadoop
    localhost: core file size (blocks, -c) 0
    localhost: data seg size (kbytes, -d) unlimited
    localhost: scheduling priority (-e) 0
    localhost: file size (blocks, -f) unlimited
    localhost: pending signals (-i) 32109
    localhost: max locked memory (kbytes, -l) 64
    localhost: max memory size (kbytes, -m) unlimited
    localhost: open files (-n) 1024
    localhost: pipe size (512 bytes, -p) 8
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
    Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
    It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.
    16/11/08 19:00:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    [hadoop@master ~]$
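
    For reference, the stack-guard warning in that log can usually be cleared by applying the fix the JVM itself suggests. A minimal sketch, assuming a CentOS/Fedora system where the execstack tool is provided by the prelink package:

    sudo yum install -y prelink
    sudo execstack -c /opt/hadoop/lib/native/libhadoop.so.1.0.0

    The NativeCodeLoader warning itself is generally harmless; as the log says, Hadoop simply falls back to the built-in Java classes.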

  2. Cleveland says:

    I have followed your tutorial. However, I am experiencing some problems when I run the command ‘start-dfs.sh’; see the output below:

    stop-dfs.sh
    OpenJDK Server VM warning: You have loaded library /opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
    It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.
    16/11/05 12:22:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    Stopping namenodes on [master.hadoop.lan]
    master.hadoop.lan: no namenode to stop
    localhost: no datanode to stop
    master.hadoop.lan: no datanode to stop
    Stopping secondary namenodes [0.0.0.0]
    0.0.0.0: no secondarynamenode to stop
    OpenJDK Server VM warning: You have loaded library /opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
    It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.
    16/11/05 12:22:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    [hadoop@master ~]$ clear

    [hadoop@master ~]$ start-dfs.sh
    OpenJDK Server VM warning: You have loaded library /opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
    It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.
    16/11/05 12:23:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
    Starting namenodes on [master.hadoop.lan]
    master.hadoop.lan: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-master.out
    localhost: mv: cannot move ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.4’ to ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.5’: No such file or directory
    localhost: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.3’: No such file or directory
    localhost: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.2’: No such file or directory
    localhost: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out.1’: No such file or directory
    master.hadoop.lan: mv: cannot stat ‘/opt/hadoop/logs/hadoop-hadoop-datanode-master.out’: No such file or directory
    localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-master.out
    master.hadoop.lan: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-master.out
    localhost: ulimit -a for user hadoop
    localhost: core file size (blocks, -c) 0
    localhost: data seg size (kbytes, -d) unlimited
    localhost: scheduling priority (-e) 0
    localhost: file size (blocks, -f) unlimited
    localhost: pending signals (-i) 32109
    localhost: max locked memory (kbytes, -l) 64
    localhost: max memory size (kbytes, -m) unlimited
    localhost: open files (-n) 1024
    localhost: pipe size (512 bytes, -p) 8
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
    OpenJDK Server VM warning: You have loaded library /opt/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
    It’s highly recommended that you fix the library with ‘execstack -c <libfile>’, or link it with ‘-z noexecstack’.
    16/11/05 12:23:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

    I am running Fedora 24. My Java installation is at /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc24.i386/jre/bin/java.

    My Java settings in the .bash_profile file are:
    ## JAVA env variables
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-1.b16.fc24.i386/jre
    export PATH=$PATH:$JAVA_HOME/bin
    export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar

    Please let me know what I am doing wrong.

  3. VALERO FERNANDEZ says:

    It works for me with this line instead of the proposed one:

    curl -LO -H "Cookie: oraclelicense=accept-securebackup-cookie" 'http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.rpm'
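
    If curl is not available, an equivalent wget invocation would be something along these lines (same license-acceptance cookie, same URL and version as the line above):

    wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.rpm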

  4. Juliano Atanazio says:

    Why not a systemd configuration?
    Unit files… :(

  5. Amogh says:

    Great tutorial. Thanks for taking the time to do this. I followed the tutorial to the letter, but when I execute hdfs namenode -format, I get the following error:
    /opt/hadoop/bin/hdfs: line 35: /opt/hadoop/bin../libexec/hdfs-config.sh: No such file or directory
    /opt/hadoop/bin/hdfs: line 304: exec: : not found

    Any help would be appreciated. Thanks!

    • Matei Cezar says:

      It seems those scripts are not in the expected path or don't have the execute bit set. Do a recursive listing of the /opt/hadoop/bin directory to locate those commands, then set the correct path and permissions.
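
      A rough sketch of what that check might look like, assuming the tutorial's /opt/hadoop layout (adjust the paths to your install):

      ls -lR /opt/hadoop/bin /opt/hadoop/libexec            # confirm hdfs and hdfs-config.sh are present
      chmod +x /opt/hadoop/bin/* /opt/hadoop/libexec/*.sh    # restore the execute bit if it is missing

      If the files are simply missing, re-extracting the Hadoop tarball into /opt/hadoop is the safer fix.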

  6. Rvd says:

    Hi Matei,

    Nice how-to, and I have a suggestion: we have to edit yarn-site.xml and add the following:

    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
      <name>yarn.resourcemanager.resource-tracker.address</name>
      <value>hmaster:8025</value>
    </property>

    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>hmaster:8030</value>
    </property>

    <property>
      <name>yarn.resourcemanager.address</name>
      <value>hmaster:8050</value>
    </property>

    Otherwise we will get the following error:

    STARTUP_MSG: java = 1.8.0_101
    ************************************************************/
    16/09/12 14:00:46 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
    16/09/12 14:00:46 INFO namenode.NameNode: createNameNode [-format]
    16/09/12 14:00:47 WARN conf.Configuration: bad conf file: element not <property>
    16/09/12 14:00:47 WARN conf.Configuration: bad conf file: element not <property>
    16/09/12 14:00:47 WARN conf.Configuration: bad conf file: element not <property>
    16/09/12 14:00:47 WARN conf.Configuration: bad conf file: element not <property>
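
    Those "bad conf file" warnings mean the configuration XML is not structured the way Hadoop expects, typically an element other than <property> sitting directly under <configuration>. One quick sanity check for plain XML syntax errors, assuming the tutorial's /opt/hadoop layout, is:

    xmllint --noout /opt/hadoop/etc/hadoop/yarn-site.xml && echo "yarn-site.xml is well-formed"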

    Also, I would like Hive installation steps in a how-to as fantastic as this one.

  7. Mokuteno says:

    It’s really helpful, thanks!

  8. David says:

    Now automate it in an Ansible role and publish it on Galaxy. Then nobody needs to do this manually anymore.

    • Ravi Saive says:

      @David,

      I totally agree with you, and it saves so much time, but I think only experts will be able to automate it in Ansible; a newbie can't…

  9. helloworld says:

    Thanks, super tutorial!
