Ubuntu 22.04 and Kubernetes Recently Broke Compatibility with Each Other (and How to Work Around It)

Here’s another blog post that I’m creating for the same reason as the previous one. It took me a bit longer than I’d like to admit to figure this out, so I’m writing it up for anyone else out there wondering why their automated Kubernetes builds on Ubuntu 22.04 suddenly started failing for no apparent reason. Specifically, your Kubernetes cluster builds started failing on December 9th. (You literally can’t make this stuff up.) After troubleshooting for countless hours, I finally figured it out. I mentioned it to Jeff Geerling (yes, THAT Jeff Geerling), and he suggested I write a blog post in case it may help someone else. I figured that his suggestion was logical 🖖, so here it is.

What’s the problem I’m referring to? If you’re attempting to initialize a Kubernetes cluster on Ubuntu 22.04 and you see error messages that include output such as this:

CRI v1 runtime API is not implemented for endpoint

Or maybe even this:

unknown service runtime.v1.RuntimeService

Continue reading and I’ll let you know what the issue is and how to fix it. I’ll also sneak in a quantum science reference, and it’s going to be a good time.

So, about that Kubernetes initialization issue on Ubuntu – what’s going on, and what’s the fix? Well, here’s the shorter version for all of you who are ADHD’ers (like me):

If you’re running Ubuntu 22.04 on your server and you use containerd as your container runtime, then the newly released Kubernetes 1.26 won’t work. At the very least, it doesn’t work at this very moment. You’ll have to wait for Canonical to fix this in their repository (if they ever do), or use a third-party repository to get a newer version of containerd. You didn’t see this coming? Yeah, it blindsided me too.
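If you want to quickly check whether this applies to you, a version comparison like the following can help (a rough sketch – the exact output format of `containerd --version` varies a bit between the Ubuntu and Docker packages, but the version is typically the third field):

```shell
# Returns success (0) if the given containerd version is older than 1.6.0,
# i.e. too old for Kubernetes 1.26. Uses sort -V for version-aware comparison.
too_old_for_k8s_126() {
    v="$1"
    required="1.6.0"
    [ "$v" != "$required" ] && \
        [ "$(printf '%s\n%s\n' "$required" "$v" | sort -V | head -n1)" = "$v" ]
}

# Grab the version of the installed containerd (strip any leading "v")
ver="$(containerd --version 2>/dev/null | awk '{print $3}' | tr -d 'v')"

if [ -z "$ver" ]; then
    echo "containerd doesn't appear to be on the PATH"
elif too_old_for_k8s_126 "$ver"; then
    echo "containerd $ver is too old for Kubernetes 1.26 (need >= 1.6.0)"
else
    echo "containerd $ver should be fine"
fi
```

On a stock Ubuntu 22.04 box, this should report the 1.5.9 package as too old.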

If you’re still reading, then that means you either don’t have ADHD and this article actually has you intrigued. Or, you do have ADHD and this article triggered your hyper-focus. Either way, yay.

Here’s what has me amused – this is the perfect example of the strange situations that homelabbers, Linux and DevOps people (you know, the cool people) sometimes run into. You’re troubleshooting an issue that seems straightforward but turns out not to be, and the culprit ends up being the last thing you thought of but you honestly could’ve checked first (but not one single person out there would have).

I was in the process of automating Kubernetes cluster builds via Ansible. (You know, the last thing anyone who values their time would ever casually consider doing on a Monday afternoon). But it genuinely seemed like it would be a fun project – and to be fair, I did enjoy it. In a short amount of time, I had a cluster automation built. It worked 100% of the time. I tested it over and over. It was perfect. I ordered pizza.

Until it wasn’t perfect. With no changes to my code at all, the automated cluster builds started failing. On their own. Literally. This is the type of build issue that shouldn’t be able to exist. Nothing changes without something else changing it. But it changed on its own. A technical issue that seemingly breaks quantum science: it changed state without being observed!

But… as much as I would love that to be the case, this issue isn’t quantum (though it did end up being elementary).

On December 9th, Kubernetes 1.26 was released. And you know what? That just so happens to fall in the middle of the exact time period I was building my cluster automation in Ansible. When I started building it, the builds were pulling the “then most recent, but now older” version of Kubernetes. On December 9th, my builds started pulling in Kubernetes 1.26 packages, since my Ansible playbook was instructed to install the latest version.
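One lesson from this: in an automated build, pinning the package versions prevents exactly this kind of surprise upgrade. With the Kubernetes apt packages, that looks roughly like this (the `1.25.4-00` version string is just an example from around that time – check what your repository actually offers with `apt-cache madison kubeadm`):

```shell
# Install a pinned Kubernetes version instead of whatever "latest" is today
# (example version string; list available ones with: apt-cache madison kubeadm)
sudo apt-get install -y kubeadm=1.25.4-00 kubelet=1.25.4-00 kubectl=1.25.4-00

# Prevent apt upgrades from silently pulling in a newer release later
sudo apt-mark hold kubeadm kubelet kubectl
```

The same idea applies in Ansible: give the package module an explicit version rather than `state: latest`, and upgrades become a deliberate change to your code instead of a Monday surprise.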

Having the latest version of Kubernetes isn’t a bad thing. However, Kubernetes 1.26 breaks compatibility with containerd versions earlier than 1.6.0, which takes Ubuntu 22.04 out of the picture, since (last I checked) its repository includes containerd 1.5.9. A lot of people will pull containerd from an external repository somewhere and probably already have the latest version. But I don’t like to use third-party repositories unless I have to. And now we do have to, because that’s the only way to get Kubernetes 1.26 working here.

Anyway, the fix is to make sure you’re running containerd 1.6.0 or newer, and for now that means using a third-party repository to get hold of it. That is, unless Ubuntu surprises us and makes an exception by including containerd 1.6.0 in 22.04. I used the instructions here to set up a repository and get hold of that new containerd that’s all the buzz.
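In case it helps, here’s roughly what the Docker-repository route looks like on Ubuntu 22.04 (jammy) – this mirrors Docker’s standard keyring-based apt setup. The last two lines are there because the containerd.io package has historically shipped with the CRI plugin disabled in its default config, and kubeadm needs CRI enabled:

```shell
# Add Docker's apt repository, which carries containerd.io >= 1.6
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
    sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update && sudo apt-get install -y containerd.io

# containerd.io's default config disables the CRI plugin, which kubeadm
# needs; regenerate a stock config and restart the service
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo systemctl restart containerd
```

After that, `containerd --version` should report 1.6.x or newer, and a Kubernetes 1.26 `kubeadm init` should stop complaining about the CRI v1 runtime API.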

Hopefully this helps someone out there who was frantically searching Google, wondering why their Kubernetes clusters suddenly stopped building. Now, you know.

Notable Replies

  1. I was checking why I didn’t run into this issue, but I am using containerd.io from Docker instead of the Ubuntu version. You can install it (as you are probably aware) via

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
    sudo add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu focal stable"
    sudo apt-get update && sudo apt-get install -y containerd.io

    I had this installed with my original Kubernetes installation on Ubuntu 20.04, which I later upgraded to Ubuntu 22.04.

Continue the discussion at community.learnlinux.tv