How to Build an Awesome Kubernetes Cluster using Proxmox Virtual Environment

Proxmox Virtual Environment is an awesome virtualization solution, and Kubernetes is an awesome container orchestration solution. So why not combine these great technologies? In this video, you’ll see the entire process of setting up your very own Kubernetes cluster from scratch, with Proxmox as the underlying platform. Even if you don’t plan on using Proxmox, this method works just as well on other platforms, including physical servers. By the end of this video, you’ll have your own cluster ready to go!


Also, be sure to check out the brand-new book, Mastering Ubuntu Server 4th Edition! It’s available now, and covers Ubuntu Server 22.04 LTS.

Prerequisites

To follow along with this tutorial, you’ll need:

  • At least two Ubuntu Server 22.04 instances (one for the controller, one for at least one node)
  • A static IP address or DHCP reservation on those instances, so their IP addresses cannot change
  • The controller node should have at least 2GB of RAM and 2 cores
  • The node instances can have a single gigabyte of RAM and a single CPU core (or more if you prefer)

Note: This tutorial utilizes Ubuntu Server 22.04. If you choose to use a different distribution or a different version of Ubuntu, these commands may not work.

Setting up the cluster

Unless otherwise stated, the following commands should be run on every instance in the cluster (the controller as well as each node).

Setting up a static IP

Note: a static DHCP lease (reservation) is preferred, but if you need to configure a static IP directly on an instance instead, you can use the following Netplan config example as a basis for your own.

network:
    version: 2
    ethernets:
        eth0:
            addresses: [10.10.10.213/24]
            nameservers:
                addresses: [10.10.10.1]
            routes:
                - to: default
                  via: 10.10.10.1
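
If you edit a Netplan config directly, the change won’t take effect until it’s applied. Assuming your config file lives under /etc/netplan/ (the exact filename varies between installs), you can apply it with:

sudo netplan apply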

Installing containerd

This Kubernetes cluster will utilize the containerd runtime. To set that up, we’ll first need to install the required package:

sudo apt install containerd

Create the initial configuration:

sudo mkdir /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

For this cluster to work properly, we’ll need to enable SystemdCgroup within the configuration. To do that, we’ll need to edit the config we’ve just created:

sudo nano /etc/containerd/config.toml

Within that file, find the following line of text:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]

Underneath that, find the SystemdCgroup option and change it to true, which should look like this:

SystemdCgroup = true
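
If you’d rather make this change non-interactively (for example, when scripting the setup), a sed one-liner like the following should work, assuming the default config generated above (where SystemdCgroup starts out as false):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

Either way, the new setting takes effect once containerd restarts, which the reboot later in this tutorial will take care of. To restart it right away instead, run:

sudo systemctl restart containerd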

Disable swap

Kubernetes requires swap to be disabled. To turn off swap, we can run the following command:

sudo swapoff -a

Next, edit the /etc/fstab file and comment out the line that corresponds to swap (if present). This will help ensure swap doesn’t end up getting turned back on after reboot.
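
If you’d rather not edit the file by hand, a sed command along these lines should comment out a typical swap entry (it simply prefixes any line mentioning swap with a # character, so double-check /etc/fstab afterwards):

sudo sed -i '/swap/ s/^/#/' /etc/fstab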

Enable bridging

To enable bridging, we only need to edit one config file:

sudo nano /etc/sysctl.conf

Within that file, look for the following line:

#net.ipv4.ip_forward=1

Uncomment that line by removing the # symbol in front of it, which should make it look like this:

net.ipv4.ip_forward=1
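
This setting is normally read at boot, but you can apply it immediately (without waiting for the reboot below) with:

sudo sysctl -p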

Enable br_netfilter

The next step is to enable br_netfilter by editing yet another config file:

sudo nano /etc/modules-load.d/k8s.conf

Add the following to that file (the file should actually be empty at first):

br_netfilter
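
That file ensures the module is loaded automatically at boot. To load it right away without rebooting, you can also run:

sudo modprobe br_netfilter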

Reboot your servers

Reboot each of your instances to ensure all of our changes so far are in place:

sudo reboot

Installing Kubernetes

The next step is to install the packages that are required for Kubernetes. First, we’ll add the required GPG key:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
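
Next, add the Kubernetes apt repository and refresh your package lists; without this, apt won’t be able to find the packages in the next step. (This command appears in the video, and is also quoted in the discussion at the end of this post.)

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update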

After that, install the packages that are required for Kubernetes:

sudo apt install kubeadm kubectl kubelet
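
Optionally, you may also want to hold these packages at their installed version, so that an unattended upgrade doesn’t move the cluster components out from under you:

sudo apt-mark hold kubeadm kubectl kubelet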

Controller node only: Initialize our Kubernetes cluster

As long as you’ve completed everything so far, you can now initialize the Kubernetes cluster. Be sure to customize the first IP address shown here to match your controller’s IP (leave the second, the pod network CIDR, as it is), and change the node name to match the hostname of your controller.

sudo kubeadm init --control-plane-endpoint=172.16.250.216 --node-name controller --pod-network-cidr=10.244.0.0/16

After the initialization finishes, you should see at least four commands printed within the output.

Setting up our user account to manage the cluster

Three commands will be shown in the output from the previous command, and they will give your user account access to manage the cluster. Here they are, to save you from having to search the output for them:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Those commands should allow you to manage the cluster, without needing to use the root account to do so.
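
As a quick sanity check that your user account can reach the cluster, you can run:

kubectl cluster-info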

Install an Overlay Network

The following command will install the Flannel overlay network. An overlay network is required so that pods can communicate across nodes, and the --pod-network-cidr we passed to kubeadm init above matches Flannel’s default range.

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
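
If you’d like to watch the Flannel pods come up, the following command will show them (depending on the Flannel release, they appear in either the kube-flannel or kube-system namespace):

kubectl get pods --all-namespaces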

Adding Nodes

The join command, which is printed in the output when you initialize the cluster, can now be run on your node instances to join them to the cluster. The following command will help you monitor which nodes have been added to the controller (it can take several minutes for them to appear):

kubectl get nodes

If for some reason the join command has expired, the following command will provide you with a new one:

kubeadm token create --print-join-command
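
For reference, the join command looks roughly like the example below; the token and hash are placeholders, so use the values from your own output, and run it with sudo on each node:

sudo kubeadm join 172.16.250.216:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>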

Deploying a container within our cluster

Create the file pod.yml with the following contents:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: linuxserver/nginx
      ports:
        - containerPort: 80
          name: "nginx-http"

Apply the file with the following command:

kubectl apply -f pod.yml

You can check the status of this deployment with the following command:

kubectl get pods
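
If the pod doesn’t reach the Running state, you can inspect its events and status in more detail with:

kubectl describe pod nginx-example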

Creating a NodePort Service

Setting up a NodePort service is one of the methods we can use to access the container from outside the pod network. To set this up, first create the following file as service-nodeport.yml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-example
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      nodePort: 30080
      targetPort: nginx-http
  selector:
    app: nginx

You can apply that file with the following command:

kubectl apply -f service-nodeport.yml

To check the status of the service deployment, you can use the following command:

kubectl get service
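
At this point, the NGINX container should be reachable on port 30080 of any node in the cluster. As a quick test, replace the placeholder below with one of your node’s actual IP addresses:

curl http://<node-ip>:30080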

And now, you have your very own Kubernetes cluster, congratulations!

Notable Replies

  1. That’s a full tutorial, nice.

  2. I’m going to make use of your blog post to create a k8s template for everything that isn’t specific to the controller, so I can spin up multiple cluster nodes from that template. It will be a whole lot easier down the road, and will save me a lot of copying, pasting, and typing.

    Thank you for an awesome tutorial.

  3. In line 336 of the article, I came across the two lines of code:

    <pre class="wp-block-code"><code>sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://
    packages.cloud.google.com/apt/doc/apt-key.gpg</code></pre>
    

    In Firefox, that translates to the following in the inspector:

    <pre class="wp-block-code">
      <code>
        sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://
    packages.cloud.google.com/apt/doc/apt-key.gpg
      </code>
    </pre>
    

    When I copied the command from the article without looking at the HTML source code, this resulted in the following:

    gpadmin-local@uk8s-template:~$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://
    packages.cloud.google.com/apt/doc/apt-key.gpg
    [sudo] password for gpadmin-local: 
    curl: (3) URL using bad/illegal format or missing URL
    -bash: packages.cloud.google.com/apt/doc/apt-key.gpg: No such file or directory
    

    If the command inside the code can all be in one line, that might help solve the problem.

    Also:

    gpadmin-local@uk8s-template:~$ sudo apt install kubeadm kubectl kubelet
    [sudo] password for gpadmin-local: 
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    
    No apt package "kubeadm", but there is a snap with that name.
    Try "snap install kubeadm"
    
    
    No apt package "kubectl", but there is a snap with that name.
    Try "snap install kubectl"
    
    
    No apt package "kubelet", but there is a snap with that name.
    Try "snap install kubelet"
    
    E: Unable to locate package kubeadm
    E: Unable to locate package kubectl
    E: Unable to locate package kubelet
    

    At least snaps have all the packages needed, so that gpg key is no longer needed?

    Update: The echo "deb ..." command is only in the video and not in the blog post.

    Update 2: Here is the command for adding a repository:

    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    

    Hopefully I can save anyone from having to type from the video.

    Update 3: This is what I copied from the blog and changed it to Home Assistant:

    apiVersion: v1
    kind: Pod
    metadata:
    Chapter 18 25
    
    name: homeassistant
      labels:
        app: homeassistant
    spec:
      containers:
        - name: homeassistant
          image: linuxserver/homeassistant
          ports:
            - containerPort: 8123
              name: "home-assistant"
    

    Applying the configuration produces the result:

    error: error parsing pods.yml: error converting YAML to JSON: yaml: line 6: could not find expected ':'
    

    I changed it to the following according to the video and it works:

    apiVersion: v1
    kind: Pod
    metadata:
      name: homeassistant
      labels:
        app: homeassistant
    spec:
      containers:
        - name: homeassistant
          image: linuxserver/homeassistant
          ports:
            - containerPort: 8123
              name: "home-assistant"
    

    And everything is ready to go. I went ahead and created a service file:

    apiVersion: v1
    kind: Service
    metadata:
      name: homeassistant
    spec:
      type: NodePort
      ports:
        - name: hass
          port: 8123
          nodePort: 30812
          targetPort: home-assistant
      selector:
        app: homeassistant
    

    Once I applied the service to the cluster, I now have Home Assistant running! :smiley:

    I have created my first Kubernetes cluster and my first container! :smiley:

Continue the discussion at community.learnlinux.tv
