Building a 10 Node Raspberry Pi Kubernetes Cluster

Have you thought about setting up your very own Kubernetes cluster consisting of multiple Raspberry Pis? It’s not as hard as it sounds, and in this video, I’ll show you how to set it up. Although the video shows the process of creating a ten-node cluster, you don’t need ten nodes – as long as you have at least two, you’ll be all set. By the end of the video, you’ll have your very own Kubernetes cluster, ready to go for my upcoming Kubernetes tutorial series.

What You’ll Need

  • At least two Raspberry Pi 4 boards
  • A Raspberry Pi certified power supply for each
  • A microSD card with decent speed for each
  • Each SD card flashed with Ubuntu 20.04
  • Full parts list (commission earned)

Initial setup (do the following on each Raspberry Pi)

Create a user for yourself

Note: this step is optional; you can continue using the ‘ubuntu’ user. But if you’d like, you can create a more personalized account for yourself instead.

sudo adduser jay
sudo usermod -aG sudo jay

Edit the host name

Edit /etc/hostname and /etc/hosts on each Pi’s SD card, changing the default name to the actual name of that node.

For example, on the controller node, the name in both files would be:

k8s-master
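
If the Pi is already booted, you can also make the change from the command line. Here’s a minimal sketch for the controller node (adjust the name per Pi, and note that whether the old name actually appears in /etc/hosts depends on your image):

# Writes the new name to /etc/hostname
sudo hostnamectl set-hostname k8s-master

# Only needed if the previous name (e.g. "ubuntu") shows up in /etc/hosts
sudo sed -i 's/\bubuntu\b/k8s-master/g' /etc/hosts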

Install all updates

sudo apt update && sudo apt dist-upgrade

Configure boot options

Edit /boot/firmware/cmdline.txt and add:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1

Note: Add that to the end of the existing first line; do not create a new line.
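
The end result should still be a single line, with the new options appended after whatever was already there. If you’d prefer to script the edit instead of doing it by hand, a sed one-liner along these lines should work (this is my own shortcut, not something shown in the video):

# Append the cgroup options to the end of line 1 of cmdline.txt
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/firmware/cmdline.txt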

Reboot each Pi:

sudo reboot

Setting up Docker

Install Docker

curl -sSL get.docker.com | sh

Optional: Add your user to the docker group:

sudo usermod -aG docker jay
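
Keep in mind that the new group membership won’t apply to your current shell session; log out and back in, or start a subshell with the updated group:

newgrp docker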

Set Docker daemon options

Edit the daemon.json file (this file most likely won’t exist yet):

sudo nano /etc/docker/daemon.json

Then paste in the following contents:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

Enable routing

Edit /etc/sysctl.conf and find the following line:

#net.ipv4.ip_forward=1

Uncomment that line.
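
If you want the change to take effect (and verify it) without waiting for the reboot in the next step, the following should do it – the second command should print a value of 1:

sudo sysctl -p
sysctl net.ipv4.ip_forward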

Reboot again

sudo reboot

Test that Docker is working properly

Check docker daemon:

systemctl status docker

Run the hello-world container:

docker run hello-world
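
Since we customized daemon.json earlier, it’s also worth confirming that Docker picked up the systemd cgroup driver – this should print “systemd” (prefix it with sudo if you skipped the docker group step):

docker info --format '{{.CgroupDriver}}'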

Setting up Kubernetes (initial setup, perform on each Pi)

Add Kubernetes repository

sudo nano /etc/apt/sources.list.d/kubernetes.list

Add:

deb http://apt.kubernetes.io/ kubernetes-xenial main

Add the GPG key to the Pi:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Install required Kubernetes packages

Refresh the package index:

sudo apt update

Note: If the repository refresh fails on a node, wait a few minutes, then try it again

Install the initial Kubernetes packages:

sudo apt install kubeadm kubectl kubelet

Note: If you get errors with this command, wait a few minutes and try again.
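
Once the install finishes, a quick sanity check doesn’t hurt. Optionally, you can also hold the packages so a routine apt upgrade doesn’t bump them to a version you weren’t planning on:

kubeadm version
kubectl version --client

# Optional: prevent unattended upgrades of the Kubernetes packages
sudo apt-mark hold kubeadm kubectl kubelet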

Setting up Kubernetes (setting up the master node)

Initialize Kubernetes

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Once this finishes, you’ll see some output that includes a kubeadm join command. Don’t join any nodes yet, but copy that command somewhere for later.
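
The join command will look roughly like the one below – the address, token, and hash here are placeholders, so use the exact command from your own output (and run it with sudo on the workers):

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>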

Set up config directory

The previous command will give you three additional commands to run, most likely these:

mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Go ahead and run those, but if it recommends different commands, run those instead.

Install flannel network driver

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: The lack of sudo is intentional

Make sure all the pods come up

kubectl get pods --all-namespaces

Join worker nodes to the cluster

Once all of the pods have come up, run the join command on each worker node. This command was provided in an earlier step.
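
If you’ve misplaced the join command, or the token has expired (tokens expire after 24 hours by default), you can generate a fresh one on the master node:

sudo kubeadm token create --print-join-command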

Check status of nodes

To see whether the nodes have joined successfully, run the following command a few times until everything shows as Ready:

kubectl get nodes

Running an NGINX container on your cluster

pod.yml file for the NGINX example

Save the following as “pod.yml” in your current working directory:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: linuxserver/nginx
      ports:
        - containerPort: 80
          name: "nginx-http"

service-nodeport.yml file for the NGINX example

Save the following as “service-nodeport.yml” in your current working directory:

apiVersion: v1
kind: Service
metadata:
  name: nginx-example
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      nodePort: 30080
      targetPort: nginx-http
  selector:
    app: nginx

Apply the pod yaml file

kubectl apply -f pod.yml

Check the status with:

kubectl get pods

Check the status with more info:

kubectl get pods -o wide

Apply the service yaml file

kubectl apply -f service-nodeport.yml

Check the status with:

kubectl get service

Test availability

Now that you’ve created both the pod and the service, you should be able to access the NGINX container at the IP address of any node in the cluster, on port 30080.
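
For example, from a browser or with curl on any machine on your network (substitute one of your actual node IPs for the placeholder):

curl http://<node-ip>:30080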

Delete a Pod

kubectl delete pod nginx-example

Delete a service

kubectl delete service nginx-example

pod.yml file for the Smokeping example

apiVersion: v1
kind: Pod
metadata:
  name: smokeping-example
  labels:
    app: smokeping
spec:
  containers:
    - name: smokeping
      image: linuxserver/smokeping
      volumeMounts:
        - mountPath: /config
          name: smokeping-data
      ports:
        - containerPort: 80
          name: smokeping-http
  volumes:
    - name: smokeping-data
      hostPath:
        path: /data

service-nodeport.yml file for the Smokeping example

apiVersion: v1
kind: Service
metadata:
  name: smokeping-example
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      nodePort: 30080
      targetPort: smokeping-http
  selector:
    app: smokeping
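
Apply these the same way as the NGINX example. Since this service also uses nodePort 30080, make sure you’ve deleted the NGINX service first (as shown above), and if the NGINX files are still in the same directory, either overwrite them or give the Smokeping versions different file names:

kubectl apply -f pod.yml
kubectl apply -f service-nodeport.yml

Check the status with:

kubectl get pods -o wide
kubectl get service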

Check out the Shop!

Support Linux Learning and get yourself some cool Linux swag!


Support LearnLinuxTV and receive 5% off an LPI exam voucher!