Running Containers in the Cloud with the Linode Kubernetes Engine

Kubernetes is a powerful platform for scaling your applications, as the lower resource usage of containers can give you greater efficiency. The Linode Kubernetes Engine allows you to easily deploy containers in the cloud, eliminating the need to maintain your own hardware for your Kubernetes stack. In this video, we’ll explore the Linode Kubernetes Engine and walk through an example of not only deploying a pod, but also setting up persistent storage.

kubectl commands

Apply a yaml file:

kubectl apply -f file.yml

Check running pods:

kubectl get pod

Check the status of a particular pod:

kubectl get pod thelounge

Get more information on a particular pod:

kubectl describe pod thelounge

Check the status of worker nodes:

kubectl get nodes
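
All of the commands above assume kubectl is pointed at your LKE cluster. A minimal sketch of how to do that, assuming you’ve downloaded the cluster’s kubeconfig file from the Linode Cloud Manager (the filename below is only an example):

export KUBECONFIG=~/Downloads/my-cluster-kubeconfig.yaml

With that variable exported, kubectl get nodes should list your cluster’s worker nodes.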

thelounge_pod.yml
 apiVersion: v1
 kind: Pod
 metadata:
   name: thelounge
   labels:
     app: irc
 spec:
   containers:
     - name: thelounge
       image: linuxserver/thelounge
       ports:
         - containerPort: 9000
           name: "irc-http"

load_balancer.yml
 apiVersion: v1
 kind: Service
 metadata:
   name: thelounge
   annotations:
     service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
   labels:
     app: irc
 spec:
   type: LoadBalancer
   ports:
   - name: http
     port: 80
     protocol: TCP
     targetPort: irc-http
   selector:
     app: irc
   sessionAffinity: None

Apply the load_balancer.yml file:

kubectl apply -f load_balancer.yml

Check the status with:

kubectl get service
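
When the service is first created, the EXTERNAL-IP column may show <pending> while Linode provisions the NodeBalancer. A minimal sketch of one way to wait for the address, assuming the service name thelounge from the file above:

kubectl get service thelounge --watch

Press Ctrl+C once an external IP appears; that’s the address you’ll use to reach the application in your browser.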

thelounge_pvc.yml
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: thelounge-pvc
 spec:
   accessModes:
   - ReadWriteOnce
   resources:
     requests:
       storage: 10Gi
   storageClassName: linode-block-storage

Apply the persistent volume claim file:

kubectl apply -f thelounge_pvc.yml

Check the status with:

kubectl get pvc
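
The claim on its own doesn’t store anything for the pod; it has to be mounted into the container. Below is a minimal sketch of how the pod definition from earlier could be extended to use it. The volume name and the /config mount path are assumptions (many linuxserver images keep their data in /config), so adjust the path to wherever your container expects its data.

 apiVersion: v1
 kind: Pod
 metadata:
   name: thelounge
   labels:
     app: irc
 spec:
   containers:
     - name: thelounge
       image: linuxserver/thelounge
       ports:
         - containerPort: 9000
           name: "irc-http"
       volumeMounts:
         # Mount the claimed volume into the container.
         # /config is an assumption; change it to suit your image.
         - name: thelounge-data
           mountPath: /config
   volumes:
     # Pod-level volume backed by the PersistentVolumeClaim above.
     - name: thelounge-data
       persistentVolumeClaim:
         claimName: thelounge-pvc

Because most fields of a running pod are immutable, you may need to delete the existing pod (kubectl delete pod thelounge) and re-apply the file for the volume to attach. kubectl describe pod thelounge will then list the volume and the claim it’s bound to.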
