How to Install a Kubernetes Cluster on CentOS 7

Donated by Google to the open-source community, Kubernetes has become the container orchestration tool of choice. It can manage and orchestrate not just Docker, but also other container runtimes such as containerd and rkt.

A typical Kubernetes cluster generally has a master node and several worker nodes, also called minions. The worker nodes are managed from the master node, so the cluster is administered from a central point.

It’s also worth mentioning that you can deploy a single-node Kubernetes cluster, which is generally recommended for very light, non-production workloads. For this, you can use Minikube, a tool that runs a single-node Kubernetes cluster in a virtual machine on your local machine.
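
As a rough sketch of that alternative (assuming Minikube and a supported hypervisor are already installed, which this guide does not cover), a single-node cluster can typically be brought up and checked with:

# minikube start
# kubectl get nodes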

Recommended Read: How to Install a Kubernetes Cluster on CentOS 8

For this tutorial, we will walk through a multi-node Kubernetes cluster installation on CentOS 7 Linux. This tutorial is command-line based, so you will need access to your terminal window.

Prerequisites

  1. Multiple servers running CentOS 7 (1 Master Node, 2 Worker Nodes). It is recommended that your Master Node have at least 2 CPUs, though this is not a strict requirement.
  2. Internet connectivity on all your nodes. We will be fetching Kubernetes and Docker packages from remote repositories. You will also need to make sure that the yum package manager is installed and can fetch packages remotely.
  3. You will also need access to an account with sudo or root privileges. In this tutorial, I will be using my root account.

Our 3-node cluster will look something like this:

Kubernetes Cluster Diagram

Installation of Kubernetes Cluster on Master-Node

For Kubernetes to work, you will need a container engine. For this installation, we will use Docker, as it is the most popular.

The following steps will run on the Master-Node.

Step 1: Prepare Hostname, Firewall and SELinux

On your master node, set the hostname and, if you don’t have a DNS server, also update your /etc/hosts file.

# hostnamectl set-hostname master-node
# cat <<EOF>> /etc/hosts
10.128.0.27 master-node
10.128.0.29 node-1 worker-node-1
10.128.0.30 node-2 worker-node-2
EOF

You can ping worker-node-1 and worker-node-2 using the ping command to test whether your updated hosts file is working.

# ping 10.128.0.29
# ping 10.128.0.30

Next, disable SELinux and update your firewall rules.

# setenforce 0
# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
# reboot

Set the following firewall rules for the required ports. Make sure that each firewall-cmd command returns success.

# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --permanent --add-port=2379-2380/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=10251/tcp
# firewall-cmd --permanent --add-port=10252/tcp
# firewall-cmd --permanent --add-port=10255/tcp
# firewall-cmd --reload
# modprobe br_netfilter
# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
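
Note that the echo command above only applies until the next reboot. As a minimal sketch of how to persist the br_netfilter module and the bridge setting across reboots (the file name k8s.conf is just a suggested name, not something this guide requires), you could do:

# echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
# echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
# sysctl --system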

Step 2: Setup the Kubernetes Repo

You will need to add Kubernetes repositories manually as they do not come installed by default on CentOS 7.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Step 3: Install Kubeadm and Docker

With the package repo now ready, you can go ahead and install kubeadm and docker packages.

# yum install kubeadm docker -y 

When the installation completes successfully, enable and start both services.

# systemctl enable kubelet
# systemctl start kubelet
# systemctl enable docker
# systemctl start docker

Step 4: Initialize Kubernetes Master and Setup Default User

Now we are ready to initialize the Kubernetes master, but before that you need to disable swap in order to run the “kubeadm init” command.

# swapoff -a
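
The swapoff command only disables swap until the next reboot. If you also want swap to stay off permanently, one common approach (shown here as a sketch; double-check /etc/fstab afterwards) is to comment out the swap entry:

# sed -i '/ swap / s/^/#/' /etc/fstab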

Initializing the Kubernetes master is a fully automated process that is managed by the “kubeadm init” command, which you will run next.

# kubeadm init
Initialize Kubernetes Master

Copy the last line of the output and save it somewhere, because you will need to run it on the worker nodes.

kubeadm join 10.128.0.27:6443 --token nu06lu.xrsux0ss0ixtnms5 \
    --discovery-token-ca-cert-hash sha256:f996ea3564e6a07fdea2997a1cf8caeddafd6d4360d606dbc82314688425cd41

Tip: Sometimes this command might complain about the arguments passed if the line break is pasted as part of the command. In that case, delete the ‘\’ line-continuation character and join everything onto a single line, so your final command looks like this.

kubeadm join 10.128.0.27:6443 --token nu06lu.xrsux0ss0ixtnms5  --discovery-token-ca-cert-hash sha256:f996ea3564e6a07fdea2997a1cf8caeddafd6d4360d606dbc82314688425cd41
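
If you lose this join command, you do not need to re-run kubeadm init; you can print a fresh join command (with a new token) on the master node at any time:

# kubeadm token create --print-join-command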

Having initialized Kubernetes successfully, you will need to allow your user to start using the cluster. In our case, we are running this installation as the root user, so we will run these commands as root. Alternatively, you can switch to a sudo-enabled user of your choice and run the commands below using sudo.

To use root, run:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

To use a sudo enabled user, run:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now check that the kubectl command is working.

# kubectl get nodes
Check Status of Nodes

At this point, you will also notice that the status of the master-node is ‘NotReady’. This is because we are yet to deploy the pod network to the cluster.

The pod network is the overlay network for the cluster; it is deployed on top of the existing node network and is designed to allow connectivity between pods.

Step 5: Setup Your Pod Network

Deploying the cluster network is a highly flexible process depending on your needs, and there are many options available. Since we want to keep our installation as simple as possible, we will use the Weave Net plugin, which does not require any configuration or extra code and provides one IP address per pod, which is great for us. If you want to see more options, check the Kubernetes documentation on networking add-ons.

Run the following commands to set up the pod network.

# export kubever=$(kubectl version | base64 | tr -d '\n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
Setup Pod Network
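
It can take a minute or two for the Weave Net pods to come up. If you want to watch their progress, you can list the kube-system pods on the master node:

# kubectl get pods -n kube-system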

Now if you check the status of your master-node, it should be ‘Ready’.

# kubectl get nodes
Check Status of Master Nodes

Next, we add the worker nodes to the cluster.

Setting Up Worker Nodes to Join Kubernetes Cluster

The following steps must be run on every worker node that is joining the Kubernetes cluster.

Step 1: Prepare Hostname, Firewall and SELinux

On worker-node-1 and worker-node-2, set the hostname (node-1 on the first worker, node-2 on the second) and, if you don’t have a DNS server, also add the master and worker nodes to the /etc/hosts file.

# hostnamectl set-hostname 'node-1'
# cat <<EOF>> /etc/hosts
10.128.0.27 master-node
10.128.0.29 node-1 worker-node-1
10.128.0.30 node-2 worker-node-2
EOF

You can ping master-node to test whether your updated hosts file is working.

Next, disable SELinux and update your firewall rules.

# setenforce 0
# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Set the following firewall rules for the required ports. Make sure that all firewall-cmd commands return success.

# firewall-cmd --permanent --add-port=6783/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=10255/tcp
# firewall-cmd --permanent --add-port=30000-32767/tcp
# firewall-cmd  --reload
# modprobe br_netfilter
# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Step 2: Setup the Kubernetes Repo

You will need to add Kubernetes repositories manually as they do not come pre-installed on CentOS 7.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Step 3: Install Kubeadm and Docker

With the package repo now ready, you can go ahead and install kubeadm and docker packages.

# yum install kubeadm docker -y 

Enable and start both services.

# systemctl enable docker
# systemctl start docker
# systemctl enable kubelet
# systemctl start kubelet

Step 4: Join the Worker Node to the Kubernetes Cluster

To join the cluster, we now need the join command that kubeadm init generated. If you saved it earlier, you can copy and paste it on node-1 and node-2.

# kubeadm join 10.128.0.27:6443 --token nu06lu.xrsux0ss0ixtnms5  --discovery-token-ca-cert-hash sha256:f996ea3564e6a07fdea2997a1cf8caeddafd6d4360d606dbc82314688425cd41 
Join Nodes to Kubernetes Cluster

As suggested in the last line of the output, go back to your master-node and check whether node-1 and node-2 have joined the cluster using the following command.

# kubectl get nodes
Check All Nodes Status in Kubernetes Cluster

If all the steps ran successfully, you should see node-1 and node-2 in Ready status on the master-node.
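
By default, the ROLES column shows <none> for the worker nodes. If you would like it to display a worker role, you can optionally label the nodes from the master node. This is an illustrative extra step, not part of the original procedure; the label key used here is the conventional node-role label that kubectl reads:

# kubectl label node node-1 node-role.kubernetes.io/worker=worker
# kubectl label node node-2 node-role.kubernetes.io/worker=worker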

Recommended Read: How to Deploy Nginx on a Kubernetes Cluster

At this point, we have successfully completed the installation of a Kubernetes cluster on CentOS 7 and onboarded two worker nodes. You can now begin to create your pods and deploy your services.
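
As a quick, optional sanity check (the deployment name nginx below is an arbitrary choice for this example), you can create a test deployment, expose it on a NodePort, and then browse to any node’s IP on the port reported by the service:

# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --port=80 --type=NodePort
# kubectl get svc nginx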


Kioie Eddy

I work as a Cloud Architect in Nairobi, Kenya, and spend my time designing Cloud and DevOps architectures with a focus on open source. I am also a contributor to several open-source projects.


34 Responses

  1. Mangesh says:

    I have installed Kubernetes by following your guide on an AWS EC2 instance and deployed a sample HTML application on it.

    Now I want to access my sample app using a DNS name like app.example.com.

    How can I do that? Any ideas?

    • Vamshi says:

      Mangesh,

      This is a two-part solution.

      Part 1 – Domain:

      • If you have a public domain of your own then you are sorted and you can follow Part 2; all you have to do is create A records for the domain in your DNS management.
      • If you do not have a public domain then you will need to use a private domain and force your local workstation to use that DNS server. For example, you could set up a local DNS server like dnsmasq and configure your laptop to use it before any other DNS IPs.

      All of this is assuming that you are using public IPs on your EC2 instances. If you are using private IPs, you will need an LB with a public IP OR a VPN from your local network to AWS, which I don’t think you would have, considering the expense.

      Part 2 – Accessing the domain:

      • Option 1 – You could use type: LoadBalancer in your service and it creates the LB (provided you gave it the access). You can then point your domain to the LB IP and access it. To make it future-proof for more domains, you can even set up an ingress-controller with its service as a LoadBalancer and create an ingress for your app. That way not just this but future domains can all be hosted on the same LB, saving you money.
      • Option 2 – You could create an ingress-controller like Nginx as a DaemonSet instead of a Deployment and enable the host port and host IPs. That way it uses the node IPs; however, this needs your nodes to have public IPs to work, and then the ingress created. Point your domain to any of the node IPs and you can access your site.
      • Option 3 – Create your service as type: NodePort and point your domain to any of the node IPs, then go to app.example.com:NODEPORT and you can access your app.

      Part 2 assumes that you have public IPs for your nodes. If that’s not the case, Option 1 still works; it’s just that you pay for the LB.

  2. ravi says:
    # firewall-cmd --permanent --add-port=30000-32767/tcp
    success
    # firewall-cmd --reload
    success
    # echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
    -bash: /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
    

    What is the reason? Could you please help?

    • Olami says:

      Ran into the same issue, but fixed it with the following command.

      # modprobe br_netfilter
      

      then re-run this:

      # echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
      
  3. Adithyan says:

    Hi, I am getting the below error while doing Kube init.

    # kubeadm init
    W0323 19:50:41.915357 105851 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W0323 19:50:41.915462 105851 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.4
    [preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    [WARNING HTTPProxy]: Connection to "https://10.127.200.79" uses proxy "http://72.163.217.40:8080". If that is not intended, adjust your proxy settings
    [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://72.163.217.40:8080". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher

    • Vamshi says:

      Probably you have already initialized it before. Try “kubeadm reset”; that should reset everything, and then you can initialize again.

  4. cycsky says:

    @Kioie, thanks for your great article. I’ve used 3 VMs in Hyper-V to build the cluster. After completing the steps in your guide, I got the following output when I ran `kubectl get nodes` on the master node.

    The slaves are not ready. When I run `kubectl version` on a slave node, I get the message ‘The connection to the server localhost:8080 was refused - did you specify the right host or port?’. Could you give me some advice about it? Thanks!

    # kubectl get nodes
    NAME          STATUS     ROLES    AGE    VERSION
    master-node   Ready      master   126m   v1.17.3
    node-1        NotReady   <none>   36m    v1.17.4
    node-2        NotReady   <none>   18m    v1.17.4
    
    • Vamshi says:

      @cycsky

      That’s very likely due to the Weave network pods not running on the worker nodes. On the master, do ‘kubectl get pods -n kube-system -o wide’ and you will see whether the Weave pods on those nodes are struggling. If so, can you check the pod logs to see what they are struggling with?
