How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS/RHEL 7

If you are a person who is, or has been in the past, in charge of inspecting and analyzing system logs in Linux, you know what a nightmare that task can become if multiple services are being monitored simultaneously.

In days past, that task had to be done mostly manually, with each log type being handled separately. Fortunately, the combination of Elasticsearch, Logstash, and Kibana on the server side, along with Filebeat on the client side, makes that once difficult task look like a walk in the park today.

The first three components form what is called an ELK stack, whose main purpose is to collect logs from multiple servers at the same time (also known as centralized logging).

Suggested Read: 4 Good Open Source Log Monitoring and Management Tools for Linux

A built-in web interface allows you to inspect logs quickly at a glance for easier comparison and troubleshooting. The client logs are sent to the central server by Filebeat, which can be described as a log-shipping agent.

Let’s see how all of these pieces fit together. Our test environment will consist of the following machines:

Central Server: CentOS 7, 2 GB of RAM.
Client #1: CentOS 7, 1 GB of RAM.
Client #2: Debian 8, 1 GB of RAM.

Please note that the RAM values provided here are not strict prerequisites, but recommended values for a successful implementation of the ELK stack on the central server. Less RAM on the clients will make little difference, if any.

Installing ELK Stack on the Server

Let’s begin by installing the ELK stack on the server, along with a brief explanation on what each component does:

  1. Elasticsearch stores the logs that are sent by the clients.
  2. Logstash processes those logs.
  3. Kibana provides the web interface that will help us to inspect and analyze the logs.

Install the following packages on the central server. First off, we will install Java SE 8 update 102 (the latest release at the time of this writing), which is a dependency of the ELK components.

You may want to check the Java downloads page first to see if there is a newer update available.

# yum update
# cd /opt
# wget --no-cookies --no-check-certificate --header "Cookie:; oraclelicense=accept-securebackup-cookie" ""
# rpm -Uvh jre-8u102-linux-x64.rpm

Time to check whether the installation completed successfully:

# java -version
Check Java Version from Commandline

To install the latest versions of Elasticsearch, Logstash, and Kibana, we will have to create repositories for yum manually as follows:

Enable Elasticsearch Repository

1. Import the Elasticsearch public GPG key to the rpm package manager:

# rpm --import

2. Insert the following lines into the repository configuration file /etc/yum.repos.d/elasticsearch.repo:

name=Elasticsearch repository
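
The full body of elasticsearch.repo for the 2.x series used here follows the standard yum format; a sketch is shown below (verify the exact baseurl and GPG key URL against the official Elasticsearch installation documentation):

```ini
[elasticsearch-2.x]
name=Elasticsearch repository
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
```

The logstash.repo, kibana.repo, and filebeat.repo files created in later steps follow the same format, pointing at their respective package paths on the same host.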

3. Install the Elasticsearch package.

# yum install elasticsearch

When the installation is complete, you will be prompted to start and enable elasticsearch:

Install Elasticsearch in Linux

4. Start and enable the service.

# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch

5. Allow traffic through TCP port 9200 in your firewall:

# firewall-cmd --add-port=9200/tcp
# firewall-cmd --add-port=9200/tcp --permanent

6. Check if Elasticsearch responds to simple requests over HTTP:

# curl -X GET http://localhost:9200

The output of the above command should be similar to:

Verify Elasticsearch Installation

Make sure you complete the above steps and then proceed with Logstash. Since both Logstash and Kibana share the Elasticsearch GPG key, there is no need to re-import it before installing the packages.

Suggested Read: Manage System Logs (Configure, Rotate and Import Into Database) in CentOS 7

Enable Logstash Repository

7. Insert the following lines into the repository configuration file /etc/yum.repos.d/logstash.repo:


8. Install the Logstash package:

# yum install logstash

9. Add the IP address of the ELK server to the following line below the [ v3_ca ] section in /etc/pki/tls/openssl.cnf:

[ v3_ca ]
subjectAltName = IP:
Add Elasticsearch Server IP Address

10. Generate a self-signed certificate, valid for 3650 days:

# cd /etc/pki/tls
# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
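
If you want to double-check what this command produces (an optional step, not part of the original instructions), you can recreate and inspect such a certificate anywhere. The temporary directory and the CN below are illustrative; the tutorial's own invocation takes the subject from openssl.cnf instead of -subj:

```shell
# Generate a throwaway self-signed certificate in a temporary directory
# (same key options as above, but with an explicit subject instead of openssl.cnf):
tmpdir=$(mktemp -d)
openssl req -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -keyout "$tmpdir/logstash-forwarder.key" \
  -out "$tmpdir/logstash-forwarder.crt" \
  -subj "/CN=elk-server"

# Print the subject and expiry date to verify the certificate:
openssl x509 -in "$tmpdir/logstash-forwarder.crt" -noout -subject -enddate
```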

11. Configure Logstash input, output, and filter files:

Input: Create /etc/logstash/conf.d/input.conf and insert the following lines into it. This is necessary for Logstash to “learn” how to process beats coming from the clients. Make sure the paths to the certificate and key match the ones generated in the previous step:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Output (/etc/logstash/conf.d/output.conf) file:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Filter (/etc/logstash/conf.d/filter.conf) file. We will parse syslog messages for simplicity:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
12. Verify the Logstash configuration files.

# service logstash configtest
Verify Logstash Configuration

13. Start and enable logstash:

# systemctl daemon-reload
# systemctl start logstash
# systemctl enable logstash

14. Configure the firewall to allow Logstash to receive logs from the clients (TCP port 5044):

# firewall-cmd --add-port=5044/tcp
# firewall-cmd --add-port=5044/tcp --permanent

Enable Kibana Repository

15. Insert the following lines into the repository configuration file kibana.repo:

name=Kibana repository

16. Install the Kibana package:

# yum install kibana

17. Start and enable Kibana.

# systemctl daemon-reload
# systemctl start kibana
# systemctl enable kibana

18. Make sure you can access Kibana’s web interface from another computer (allow traffic on TCP port 5601):

# firewall-cmd --add-port=5601/tcp
# firewall-cmd --add-port=5601/tcp --permanent

19. Launch Kibana in your browser (TCP port 5601 on the server) to verify that you can access the web interface:

Access Kibana Web Interface

We will return here after we have installed and configured Filebeat on the clients.

Suggested Read: Monitor Server Logs in Real-Time with “” Tool in Linux

Install Filebeat on the Client Servers

We will show you how to do this for Client #1 (repeat for Client #2 afterwards, changing paths if applicable to your distribution).

1. Copy the SSL certificate from the server to the clients:

# scp /etc/pki/tls/certs/logstash-forwarder.crt [email protected]:/etc/pki/tls/certs/

2. Import the Elasticsearch public GPG key to the rpm package manager:

# rpm --import

3. Create a repository for Filebeat (/etc/yum.repos.d/filebeat.repo) on CentOS-based distributions:

name=Filebeat for ELK clients
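
A typical Beats-era repository body looks like the following sketch (confirm the baseurl against the official Filebeat installation docs for your version):

```ini
[beats]
name=Filebeat for ELK clients
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
```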

4. Configure the source to install Filebeat on Debian and its derivatives:

# aptitude install apt-transport-https
# echo "deb stable main" > /etc/apt/sources.list.d/filebeat.list
# aptitude update

5. Install the Filebeat package:

# yum install filebeat        [On CentOS and based Distros]
# aptitude install filebeat   [On Debian and its derivatives]

6. Start and enable Filebeat:

# systemctl start filebeat
# systemctl enable filebeat

Configure Filebeat

A word of caution here. Filebeat configuration is stored in a YAML file, which requires strict indentation. Be careful with this as you edit /etc/filebeat/filebeat.yml as follows:

  1. Under paths, indicate which log files should be “shipped” to the ELK server.
  2. Under prospectors, set:

input_type: log
document_type: syslog

  3. Under output:
    1. Uncomment the line that begins with logstash.
    2. Indicate the IP address of your ELK server and the port where Logstash is listening in hosts.
    3. Make sure the path to the certificate points to the actual file you created in step 10 of the Logstash section above.
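
Putting those edits together, the relevant portions of /etc/filebeat/filebeat.yml look roughly like this for the Filebeat 1.x series used here (a sketch: the server IP is a placeholder, and your file will contain many other commented-out options):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
        - /var/log/secure
      input_type: log
      document_type: syslog
output:
  logstash:
    # Placeholder: replace with the IP address of your ELK server
    hosts: ["ELK_SERVER_IP:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```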

The above steps are illustrated in the following image:

Configure Filebeat in Client Servers

Save changes, and then restart Filebeat on the clients:

# systemctl restart filebeat

Once we have completed the above steps on the clients, feel free to proceed.

Testing Filebeat

In order to verify that the logs from the clients can be sent and received successfully, run the following command on the ELK server:

# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

The output should be similar to the following (notice how messages from /var/log/messages and /var/log/secure are being received from client1 and client2):

Testing Filebeat

Otherwise, check the Filebeat configuration file for errors. Running:

# journalctl -xe

after attempting to restart Filebeat will point you to the offending line(s).

Testing Kibana

Now that we have verified that logs are being shipped by the clients and received successfully on the server, the first thing we will have to do in Kibana is configure an index pattern and set it as the default.

You can think of an index as the equivalent of a full database in a relational database context. We will go with filebeat-* (or you can use more precise search criteria, as explained in the official documentation).

Enter filebeat-* in the Index name or pattern field and then click Create:

Testing Kibana

Please note that you will be allowed to enter more fine-grained search criteria later. Next, click the star inside the green rectangle to configure it as the default index pattern:

Configure Default Kibana Index Pattern

Finally, in the Discover menu you will find several fields to add to the log visualization report. Just hover over them and click Add:

Add Log Visualization Report

The results will be shown in the central area of the screen as shown above. Feel free to play around (add and remove fields from the log report) to become familiar with Kibana.

By default, Kibana will display the records that were processed during the last 15 minutes (see upper right corner) but you can change that behavior by selecting another time frame:

Kibana Log Reports


Summary

In this article we have explained how to set up an ELK stack to collect the system logs sent by two clients, a CentOS 7 machine and a Debian 8 machine.

Now you can refer to the official Elasticsearch documentation and find more details on how to use this setup to inspect and analyze your logs more efficiently.

If you have any questions, don’t hesitate to ask. We look forward to hearing from you.


Gabriel Cánepa

Gabriel Cánepa is a GNU/Linux sysadmin and web developer from Villa Mercedes, San Luis, Argentina. He works for a worldwide leading consumer product company and takes great pleasure in using FOSS tools to increase productivity in all areas of his daily work.

