By Mohamed Ismail

Bootstrapping a multi-node Kubernetes cluster with Ubuntu 20.04 (LTS)

Updated: Jan 1, 2023




Recently I tried setting up a Kubernetes cluster on Ubuntu 20.04 and faced a lot of issues. Searching for resolutions was even more difficult, so I decided to write this blog to help others who want to set up a recent Kubernetes cluster (v1.26) using Ubuntu 20.04 servers.


Kubernetes:

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.


Docker:

Docker is an open platform for developing, shipping and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications.


System requirements:

  • OS: Ubuntu 20.04 (LTS)

  • RAM: Minimum of 4 GB for the master node and 1 GB for each worker node

  • CPU: 2 CPUs or more

  • Nodes: 3 (1 master node and 2 worker nodes; you can add as many worker nodes as you want)


Now that the servers are provisioned and ready, let's bootstrap the cluster step by step.


Let's segregate the steps into those that need to be done on the master node only, on the worker nodes only, and on both nodes:


Step 1: Update Ubuntu (Both Nodes)

The first step is to update the apt package index:

sudo apt update 

Step 2: Disable Swap and Enable IP Forwarding (Both Nodes)

Memory swapping can cause performance and stability issues for Kubernetes (the kubelet will not start by default if swap is enabled), so it is good to disable it. We also need to enable IP forwarding.

Let's first check if swap is enabled:

swapon --show

The output will show the swap file name and size. If you don't see any output, swap is already disabled.

To disable it, enter the below command:

sudo swapoff -a

Note: You can also disable swap permanently by editing the /etc/fstab file and commenting out the swap file line.
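For example, one rough way to comment out the swap line in /etc/fstab in a single command (just a sketch; double-check the file afterwards, since the exact entry varies between systems):

sudo sed -i '/swap/s/^/#/' /etc/fstab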


Now let's enable IP forwarding. Edit the file /etc/sysctl.conf with sudo permissions and uncomment the following line:

net.ipv4.ip_forward = 1

After making the change, apply the configuration with the following command:

sudo sysctl -p

Once done, this should give you an output like below:

net.ipv4.ip_forward = 1

Now let's install Docker, which provides the container runtime (containerd) used by Kubernetes.


Step 3: Install Docker (Both Nodes)


Let's install a few dependency packages needed to access the Docker repository:

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common -y

Let's add Docker's GPG key for package authentication and then add the Docker repo to the apt package manager:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Now let's install docker-ce (Community Edition):

sudo apt install docker-ce -y

Once the installation succeeds, verify the version with the below command:

docker --version

Step 4: Install Kubernetes Components (Both Nodes)


Before we install Kubernetes, let's add the Kubernetes repo and GPG key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

After adding the repo, let's update the package index again:

sudo apt update 

Now let's install the Kubernetes components: kubelet, kubeadm, and kubectl.

sudo apt-get install kubeadm kubelet kubectl -y

Note: You may encounter a "public key not available" error here. If so, see the resolution in the issues section below.


Use the apt-mark hold command to ensure the tools cannot be accidentally reinstalled, upgraded, or removed.

sudo apt-mark hold kubelet kubeadm kubectl

After the installation completes successfully, verify each component's version with the below commands:

kubelet --version
kubectl version --client
kubeadm version

Step 5: Update the Docker cgroup Driver (Both Nodes)

Kubernetes expects the kubelet and the container runtime to use the systemd cgroup driver, so we need to update the cgroup driver in Docker. Open the daemon config file:

sudo vi /etc/docker/daemon.json

Add the below lines and save the file

{ "exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts":
{ "max-size": "100m" },
"storage-driver": "overlay2"
}

Once saved, reload systemd, then restart and enable the Docker service with the below commands:

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
sudo systemctl status docker
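Optionally, you can verify that Docker has picked up the systemd cgroup driver (this just filters the docker info output):

sudo docker info | grep -i "cgroup driver"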

Up to this point, all the steps have been executed on both nodes. From the next step onwards, some steps are specific to the master node and some to the worker nodes only.


Step 6: Initialize the Kubernetes Control Plane Components (Master Node Only)

Now we need to initialize the control-plane components on the master node and specify the pod network CIDR.

This is the stage where most people run into issues, so don't worry if something fails here.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Note: Check the issues section below for resolutions to errors at this stage.

If the above command is successful, your control plane components are ready at this point.

The command prints a lengthy output with a few follow-up steps to perform.


If you are logged in as a non-root user, execute the below commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are logged in as the root user, execute the below command instead:

export KUBECONFIG=/etc/kubernetes/admin.conf

At the end of the same output, you will see a join command with a token, as shown below. Copy and save it (e.g. in a notepad), as we will use it to join the worker nodes to the master node at a later stage.

kubeadm join 10.1.0.4:6443 --token hyhw9z.upj7w9ew2r3o7mss \
--discovery-token-ca-cert-hash sha256:822685ee2fec8cb477fbf40c37ab41a516ab41e92a140684b05f269a39ee06d9

Note: If you have cleared your screen or lost the token, don't worry; you can regenerate the join command with the below command.

kubeadm token create --print-join-command

Step 7: Install the CNI Plugin (Master Node Only)

To make sure the pods can communicate with each other, we need to install a CNI (Container Network Interface) plugin. There are a lot of third-party offerings available; in our case I have used Flannel. I will also leave the command for Weave Net.

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Weave Net (Optional):

sudo kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

Now wait for your pods to start running. You can check the pod status using the below command (-A denotes all namespaces):

kubectl get pods -A

Since kube-flannel runs as a DaemonSet (one pod per node), you will see 3 kube-flannel pods once all 3 nodes have joined the cluster.


Step 8: Join the Worker Nodes to the Master Node (Worker Nodes Only)

Now let's use the join command that we saved in Step 6 to join the worker nodes to the master node.

Run the same command on all the worker nodes that you created.

sudo kubeadm join 10.1.0.4:6443 --token hyhw9z.upj7w9ew2r3o7mss \
--discovery-token-ca-cert-hash sha256:822685ee2fec8cb477fbf40c37ab41a516ab41e92a140684b05f269a39ee06d9

Note: Check the issues section below for resolutions to errors at this stage.


If everything goes well, you will see the message "This node has joined the cluster" in the output.


Now go back to the master node and execute the below command to see the status of the worker nodes:

sudo kubectl get nodes

Once you see all the nodes in Ready status, you have successfully bootstrapped the cluster.


Issues that I faced during the setup:

Below are the issues that I faced at different stages of the cluster creation, along with their resolutions.


Issue 1: "Public Key is not available: NO_PUBKEY"

If these errors aren’t fixed, apt will have problems when installing or upgrading packages.

The apt packaging system has a set of trusted keys that determine whether a package can be authenticated and therefore trusted to be installed on the system. Sometimes the system does not have all the keys it needs and runs into this issue.

We can fix this issue with the below command, substituting the key ID reported in the NO_PUBKEY error message:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <missing-keys>
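For example, if apt printed an error ending in something like NO_PUBKEY 1234567890ABCDEF (a made-up key ID here, purely for illustration), you would run:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1234567890ABCDEF  # use the key ID from your own error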

Issue 2: "Container runtime is not running- status from runtime service failed"

This is a known issue with the default containerd config file, which disables the CRI plugin that Kubernetes needs. To fix it, remove the config.toml file and restart the containerd service with the below commands:

sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd



Creating a pod using the nginx image

Now let's create a pod using the nginx web server image to test if everything is working fine.


Create a pod using the nginx image with the below command:

kubectl run nginx-web-server --image=nginx

Once the pod is created, check its status using the below command (-o wide gives more information, such as the pod IP and the node it is running on):

kubectl get pods -o wide

Now let's expose this pod using a NodePort service:

kubectl expose pod/nginx-web-server --type=NodePort --port=80

Let's list the service to get the node port details.
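For example (kubectl expose names the service after the pod, nginx-web-server, by default):

kubectl get svc nginx-web-server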


Now let's access the nginx app using the public IP of worker node 2 and the node port shown in the service.
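For example, from any machine that can reach the worker node (assuming the node port is open in your firewall/security group), a quick curl test with placeholder values:

curl http://<worker-node-public-ip>:<node-port>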


It works...

Conclusion:

In this blog we have seen how to set up a multi-node Kubernetes cluster from scratch using Ubuntu 20.04 servers, and how to troubleshoot and resolve the issues we ran into along the way. See you soon in another blog. Thanks for reading; please give it a thumbs up if it was useful to you and share it with your friends!

