
3-Node Kubernetes Cluster for Home Lab - Quick & Dirty!

Setting up a 3-node Kubernetes cluster for a home lab - quick & dirty!

Introduction

Containers & orchestration are the new buzzwords of the decade. They are interesting & challenging.
Learning should be completely hands-on, and that requires a good home lab. In this series of blogs, I will walk you through the home lab I have set up, as well as put together some good references for advanced understanding - it's a work in progress and I will update it accordingly.

Background

There are online cloud-based options, but they are not cheaper in the long run, and you should avoid unexpected surprises (you can forget to shut down your VMs) - I want my Kubernetes cluster to run 24/7 and not pay for anything other than the electricity bill.

On second thought, creating your own Kubernetes cluster and maintaining it will give you immense insight which you won't get when using the AWS, Azure or Google managed k8s services.

On a different note: this is a very good opportunity to exercise your virtualization skills too. I will try to give you a quick overview of the things you should be familiar with to start a Kubernetes home lab.

Bare Metal

This is crucial - it depends on what you want to learn and how you want to learn.

Type 2 hypervisor - like VirtualBox. A great way to start, but this is just to warm up your basics while you are going through hands-on learning - I will provide some quick steps to get you up and running with a 3-node k8s cluster in about an hour in the following sections.

Type 1 hypervisor - like VMware ESXi. This is the real deal and my ultimate goal.

You need some real bad boys like a Dell PowerEdge R820 (or a previous generation of server); these are really cheap on eBay (watch this series - I will have something interesting).

Here is a very good article on the difference between the two.

You need a good machine - refer to my old blog “Build your own powerful desktop” (for a type 2 hypervisor, you do not need a server-grade machine, but something with hyperthreading, a sufficient number of cores & threads, and a good amount of memory).

Specifically for this setup, the following would be ideal (the nodes can be actual machines, VMs, or a mix of the two - it does not matter); a quick sizing sketch follows the list.

One master node - 4 vCPUs, more than 4 GB of memory, 30 GB disk space

Two worker nodes - 2 vCPUs, more than 2 GB of memory, 20 GB disk space
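If you are provisioning the nodes as VirtualBox VMs, you can size them from the GUI or from the command line. Here is a minimal sketch for the master node (the VM name, disk file name and sizes are just my assumptions - adjust to your own setup):

Shell
# Hypothetical example: create and size the master node VM from the CLI
VBoxManage createvm --name k8s-master --ostype Ubuntu_64 --register
VBoxManage modifyvm k8s-master --cpus 4 --memory 4096
VBoxManage createmedium disk --filename k8s-master.vdi --size 30720   # ~30 GB
VBoxManage storagectl k8s-master --name SATA --add sata
VBoxManage storageattach k8s-master --storagectl SATA --port 0 --device 0 --type hdd --medium k8s-master.vdi

Repeat with 2 vCPUs / 2 GB / 20 GB for the worker nodes.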

Networking

Understanding your home network is important, as are the VirtualBox networking options.

Later in the article (or in the next post), I will provide some good references on Docker & Kubernetes networking - but they are not required to set up the lab.

The following is my home setup (I do have a software VPN as well as endpoint protection, but I am not including them to avoid complexity).

(Skip this section if you are using VirtualBox or some other type 2 hypervisor with a “host only” or “NATed” network.)

Understand Your Home Network & How You Can Best Use Your Devices

You either have the modem and router combined in one single device (common with most ISPs) or, like me, two separate devices. I have a Google Wi-Fi router - it has its own advantages (you can build a wireless mesh around your home, so you can keep your power-hungry, hot server in the basement and connect from your main workstation remotely).

Identify your DHCP provider (server) - in my case, it's my Google router, which leases IPs to all of my home devices.

A switch is very handy if you have many devices; a Wi-Fi router with Ethernet ports allows you to access all of your wireless and wired devices together. At this point, make sure each device (we will discuss VM access later) can ping every other device - it should work by default, but you might need some changes in your firewall/antivirus endpoint protection and VPN (if you are using one); a quick connectivity check is sketched after the list below.

  1. If you have antivirus software on your devices, it will manage the firewall - even if all your devices are connected to a private/home network. For Bitdefender, the following link has good step-by-step details; make sure you do the same with whatever firewall protection you use.
  2. If you have a VPN, make sure local network sharing is enabled.
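Once the firewall/VPN settings are sorted, a tiny loop like the one below confirms every node can reach the others (the IPs are hypothetical examples - substitute the addresses your router actually leased):

Shell
# Hypothetical node IPs - replace with your own
for ip in 192.168.86.10 192.168.86.11 192.168.86.12; do
    ping -c 2 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip NOT reachable"
done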

So why am I not in favour of NAT or host-only networking? They are the easiest way to get any number of VMs up and running in a private network - yes, that's true, but it doesn't scale in so many ways, and neither does it give you the option to make the best use of all of your devices.

Note: I will skip over downloading and installing VirtualBox and provisioning VMs, and presume you already know the basics.

First: Scalability (& Usage)

3 VMs will use 8 vCPUs (4 hyperthreaded CPU cores), and your workstation will run at high CPU usage.

Scalability will be an issue if you want to add more nodes, HA for the master node, etc. - you will run out of available threads and memory, and remember this is your workstation, not a dedicated server; you do other things on it too!!

I play games (& work in parallel) even though my machine configuration is really good (check it here) - when you are playing Ghost Recon with 6 vCPUs (out of 12) already in use, you are unnecessarily inviting throttling.

Second: Max Utilization of Spare & Old Devices

  1. I have 3 laptops:
     - An 11-year-old Dell Inspiron (I was about to throw it away - 2 cores / 2 threads, 3 GB)
     - One Chromebook (which I can dual boot with Linux - 2 cores / 4 threads, 8 GB)
     - Another Dell laptop with Linux Mint (2 cores / 4 threads, 8 GB)
  2. One PC with 6 cores (12 vCPUs, 32 GB)
  3. A few spare phones and tablets - I can boot Linux on an Android phone and tablet (not officially) and add them to the cluster (experimental - never tried it before, but I can connect all of the wireless devices to the cluster - wait for my next installment on the same)

In total, I can squeeze out around 24-26 CPU threads spread across all my wired and wireless devices without overloading any of them.

Advantage

My old laptop, which I was about to throw away, is my master node and can run 24/7 - I have no other use for it. The Chromebook, which my wife uses and roams around the apartment with, can silently run a worker node. The third laptop might be used for HA of the master node, and my PC is for bursting - if I need a number of extra worker nodes on a certain day, I can create and destroy them on demand. And there is a possibility that I may be able to add my old Android phone to the cluster.

So the best option is to mix and match actual machines and VMs, and that is only possible if you are creating your VMs with bridged networking. (Bridged networking connects a virtual machine to a network by using the network adapter on the host system. If the host system is on a network, bridged networking is often the easiest way to give the virtual machine access to that network. When you install Workstation on a Windows or Linux host system, a bridged network (VMnet0) is set up for you.)
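In VirtualBox, the equivalent is selecting “Bridged Adapter” for the VM's network interface - either in the VM's network settings or, as a rough sketch, from the command line (the VM and host adapter names below are assumptions; VBoxManage list bridgedifs shows the real ones):

Shell
# See which host adapters VirtualBox can bridge to
VBoxManage list bridgedifs
# Attach the VM's first NIC to a bridged adapter (hypothetical adapter name)
VBoxManage modifyvm k8s-master --nic1 bridged --bridgeadapter1 enp3s0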

Once Theory is Sorted Out, Let’s Start

The following steps & scripts were tested on Ubuntu 18.04.2. Going forward, I will refer to each VM/machine as a node.

Step 1

Provision 3 nodes and install Ubuntu Server on the 2 worker nodes and Ubuntu Server + GUI on the master node.

I will skip how to install Ubuntu on a VM or actual hardware - it is just one Google search away. Make sure to install the SSH server on each of them and do not create swap space.

Ideally, you should create a base image of Ubuntu Server for all 3 nodes (without the GUI).

Complete the following initialization and then clone it to the 3 different nodes. Once installation of the OS is completed, run:

Shell
sudo apt update && sudo apt upgrade    # apt-get is not required

If any of the nodes is a VirtualBox VM, install Guest Additions: insert the Guest Additions disk into the VM and run:

Shell
sudo mount /dev/cdrom /mnt

cd /mnt

sudo apt-get install -y dkms build-essential linux-headers-generic linux-headers-$(uname -r)
sudo ./VBoxLinuxAdditions.run

sudo swapoff -a    # in case you have swap
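Note that swapoff -a only disables swap until the next reboot, and kubelet will complain if swap comes back. Assuming the swap entry lives in /etc/fstab (the usual place on Ubuntu), a common way to make the change permanent is to comment that line out:

Shell
# Comment out any swap line in /etc/fstab so swap stays off after reboots
sudo sed -i '/ swap / s/^/#/' /etc/fstab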

At this point, please take a snapshot and clone the image to the 3 different nodes (in the case of actual hardware, you might need to install separately); remember to configure each of the nodes with the specified number of cores, hard disk size and memory, and select the bridged adapter.

Snapshots will save you a lot of time if anything goes wrong, so keep taking snapshots during initialization of the nodes whenever you think you have reached a milestone.
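If the base image is a VirtualBox VM, snapshotting and cloning can also be scripted; a rough sketch, assuming the base VM is called k8s-base (adjust names to your own):

Shell
# Snapshot the prepared base image
VBoxManage snapshot k8s-base take "base-initialized"
# Full-clone it into the three nodes
VBoxManage clonevm k8s-base --name k8s-master --register --mode all
VBoxManage clonevm k8s-base --name k8s-node01 --register --mode all
VBoxManage clonevm k8s-base --name k8s-node02 --register --mode all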

Step 2

Provision (boot) all the nodes, run ifconfig and note the IP of each.

Make sure you can SSH to all the nodes and that each of them can ping the others (earlier, I explained the challenges you might face and their solutions).

Make use of a good terminal manager - it makes things much easier to manage; you have to run a lot of commands and keep lots of terminal windows open. I use MobaXterm - really useful.

Step 3 (Applicable to All the nodes)

Each node should know the others by name, so you need to think of a naming convention for your nodes. I use the following names: master node - k8s-master; worker nodes - k8s-node01, k8s-node02…

Check host name (& change if required):

Shell
sudo nano /etc/hostname    # edit the host name accordingly, save and exit

sudo nano /etc/hosts

and append the following:

<master node ip> k8s-master
<node1 ip>       k8s-node01
<node2 ip>       k8s-node02

Save and exit.
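As an alternative to editing /etc/hostname by hand, hostnamectl does the same job in one command on systemd-based distributions such as Ubuntu 18.04 (run the matching command on each node):

Shell
# Set the hostname on the master node; use k8s-node01 / k8s-node02 on the workers
sudo hostnamectl set-hostname k8s-master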

Step 4: (Install Docker - Applicable to All the Nodes)

Shell
# Make sure to remove any existing Docker installation
sudo apt-get remove docker docker-engine docker.io containerd runc

sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io

sudo apt-cache policy docker-ce    # check the installed/candidate version

sudo systemctl status docker       # check the status

sudo systemctl enable docker       # enable docker at boot
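To sanity-check the installation - and, optionally, to run docker without sudo - you can do the following (the group change only takes effect after you log out and back in):

Shell
# Verify the Docker engine works end to end
sudo docker run hello-world
# Optional: allow your user to run docker without sudo
sudo usermod -aG docker $USER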

Take snapshot!!

Step 5: (Install k8s - Applicable to All the Nodes)

Shell
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

# Configure iptables for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system

sudo apt update

sudo apt install -y kubelet kubeadm kubectl

sudo systemctl enable kubelet
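Since kubeadm clusters are sensitive to version skew between these components, it is common practice (per the kubeadm install docs) to pin the three packages so a routine apt upgrade does not move them unexpectedly:

Shell
# Prevent unattended upgrades of the Kubernetes packages
sudo apt-mark hold kubelet kubeadm kubectl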

Step 6: (Initialize k8s - Only on the Master Node)

Optional: Install the desktop GUI by running sudo apt install ubuntu-desktop (you will need it to access the k8s dashboard).

Shell
sudo kubeadm init --apiserver-advertise-address=<master node ip> \
    --pod-network-cidr=192.168.0.0/16

This will print a kubeadm join command - copy it and save it somewhere!

Example: kubeadm join <master node ip>:6443 --token bkffxa.42w0tyswagctogv1 --discovery-token-ca-cert-hash sha256:894039dca5a45491db1090xxxxxxxxxxxxxxxxxxxxxxxx

Note: 192.168.0.0/16 is the pod network CIDR expected by Calico.

As a regular user, run the following:

Shell
mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run watch kubectl get pods --all-namespaces. Most of the pods will come up and run, but you won't have Calico or the dashboard yet, and CoreDNS will stay pending. If that is the case, take a snapshot, because next we are going to install Calico for k8s networking.

Run:

Shell
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml

You will see the Calico pods being created.
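Once the Calico pods settle, CoreDNS should move to Running and the master node should report Ready; a quick way to confirm (output will vary with your setup):

Shell
# CoreDNS should now be Running and the master node Ready
kubectl get pods -n kube-system
kubectl get nodes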

Step 7: (Initialize the k8s Dashboard)

Let’s install dashboard by running:

Shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

You need a service account to access the k8s dashboard:

kubectl --namespace kube-system create serviceaccount k8s-dadmin    # k8s-dadmin is the name of the service account

kubectl create clusterrolebinding k8s-dadmin --serviceaccount=kube-system:k8s-dadmin --clusterrole=cluster-admin

Run the following command to retrieve the service account token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep k8s-dadmin | awk '{print $1}')

This will print a long token - save it, you will need it to log in to the dashboard.

Run kubectl proxy, which exposes the dashboard through the API server proxy, and you can access http://localhost:8001.

Later, you can access the actual dashboard portal by going to the proxy URL of the kubernetes-dashboard service (with this manifest, typically http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/).

It will ask you for a token.

Copy and paste the service account token you saved earlier, and you will be able to log in to the dashboard.

Step 8: (Join All the Worker Nodes to the Cluster)

Run the kubeadm join command you have been provided with in Step 6.

Shell
sudo kubeadm join <master node ip>:6443 --token bkffxa.42w0tyswagctogv1 \
    --discovery-token-ca-cert-hash sha256:894039dca5a45491db1090xxxxxxxxxxxxxxxxxxxxxxxx

Go back to the master node and access the dashboard, or run kubectl get nodes.
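If you get to this step more than 24 hours after kubeadm init, the original bootstrap token will have expired; you can generate a fresh join command on the master at any time:

Shell
# Print a new, ready-to-run join command (run on the master node)
sudo kubeadm token create --print-join-command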

As I have said, this is not a server-grade configuration - it is your workstation - so you may need to shut down all your nodes from time to time. If you face issues after a restart, run the following commands on the master node and you will be able to get your cluster working again!

Shell
# After a restart
sudo -i
swapoff -a
exit
strace -eopenat kubectl version

History

  • 8th June, 2019: Initial version

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

