I had a bunch of hardware lying around from an earlier abandoned side project:
I downloaded Raspbian Buster Lite from the official site and wrote it to the SD cards (using dd).
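Something like this, presumably; the image filename and output device below are placeholders, and dd is unforgiving, so check the device name carefully:

```sh
# Write the image to the SD card; /dev/sdX is a placeholder, not a real device.
sudo dd if=raspbian-buster-lite.img of=/dev/sdX bs=4M status=progress conv=fsync
```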
It’s at this point that I diverge from Scott’s blog post; he installs full-fat Kubernetes. I’m going to use k3s.
Start nginx, with 3 replicas:
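A minimal sketch of that (the original commands may have differed):

```sh
# Create an nginx deployment, scale it to 3 replicas, and see where the pods land.
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
```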
Now that I’ve successfully run nginx on my cluster, I’m going to do the same with a simple node.js server.
Note: This is basically the same as the node.js server.
About 2 years ago, I spent some time messing around with k3s on a cluster made from 5x Raspberry Pi 2 Model B nodes.
This morning, I grabbed my Raspberry Pi cluster out of the box and fired it up again.
The default option for persistent volumes on k3s is local-path, which provisions the storage on demand on the node’s local disk. This has the unfortunate side-effect that the container is now tied to that particular node.
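For illustration, a hypothetical claim against the default storage class looks like this:

```sh
# A PVC using k3s's default local-path storage class. The volume is created
# on whichever node the first consuming pod is scheduled to, pinning it there.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
```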
Let’s see if we can push an image to our new Docker Registry.
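Roughly like this; the registry name below matches the one used elsewhere in this series, but treat it as an assumption:

```sh
# Tag an image for the private registry and push it.
docker pull nginx:latest
docker tag nginx:latest docker.k3s.differentpla.net/nginx:latest
docker push docker.k3s.differentpla.net/nginx:latest
```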
We need to give our Docker registry some persistent storage. Currently, if we restart it, it loses its stored data.
In the previous post, we succeeded in giving our docker registry some persistent storage. However, it used (the default) dynamic provisioning, which means we don’t have as much control over where the storage is provisioned.
Until just now, I didn’t get NodePort services.
Pushing a simple node.js-based image to my private docker registry failed.
Persistent Volume Claims are namespace-scoped. Persistent Volumes are not:
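You can see the difference in the NAMESPACED column:

```sh
# PVCs are namespaced (true); PVs are cluster-scoped (false).
kubectl api-resources | grep -i persistentvolume
```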
Can I upgrade my Raspberry Pi 4-powered k3s cluster to arm64? Without rebuilding everything? tl;dr: no.
My Raspberry Pi 4 cluster is currently 32-bit. It’s got a 32-bit kernel with a 32-bit userland. But I need to run 64-bit software on it. I looked into upgrading it in place, but that’s infeasible. So I need to reinstall it.
Having reinstalled all of my nodes with Ubuntu, I need to go back and install k3s. Joy.
To install other things, we’re going to want to use Helm. So let’s install that first.
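One way to do that, using the official install script:

```sh
# Install Helm 3 and confirm it works.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
```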
Installation with Helm, per https://metallb.universe.tf/installation/#installation-with-helm.
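Which boils down to roughly this (the namespace is my choice; check the linked page for current values):

```sh
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb --namespace metallb-system --create-namespace
```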
I forgot to disable Klipper, the K3s-provided load balancer.
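Klipper (servicelb) grabs the LoadBalancer services that MetalLB should be handling, so it needs to be disabled at k3s install time. A sketch, assuming a reinstall via the official script:

```sh
# Reinstall k3s with the bundled service load balancer disabled.
curl -sfL https://get.k3s.io | sh -s - --disable servicelb
```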
Installation with Helm, per https://longhorn.io/docs/1.2.3/deploy/install/install-with-helm/.
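Roughly this (see the linked page for the version-specific details):

```sh
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```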
Wherein I finally bring together all we’ve learned so far and stand this thing up properly.
I’d like to be able to access my load-balanced services by name (docker.k3s.differentpla.net, for example) from outside my k3s cluster. I’m using --addn-hosts on dnsmasq on my router.
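For illustration, something like this on the router; the file paths and IP address are assumptions:

```sh
# Point dnsmasq at an additional hosts file...
cat >> /etc/dnsmasq.conf <<'EOF'
addn-hosts=/etc/hosts.k3s
EOF
# ...and list each load-balanced service in it.
cat > /etc/hosts.k3s <<'EOF'
192.168.28.11  docker.k3s.differentpla.net
EOF
```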
This is fragile. Every time I want to add a load-balanced service, I need to edit the additional hosts file on my router, and I need to restart dnsmasq.
If you’re using a NodePort service, and it has multiple replicas, how does it know which replica to use?
I recently left my k3s cluster turned off for a week or so. When I turned it back on, the k3s.differentpla.net DNS wasn’t working. Let’s figure it out and maybe write a runbook for the next time.
I want to play with GitOps on my k3s cluster (specifically ArgoCD). To do that, I’m going to need a local git server. I decided to use Gitea.
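A sketch of the installation with Helm; the chart repository URL may have moved since this was written:

```sh
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
helm install gitea gitea-charts/gitea --namespace gitea --create-namespace
```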
I’m in the middle of installing ArgoCD (blog post will appear later). Rather than use up another LoadBalancer IP address for it (and mess around with TLS), let’s talk about using an Ingress. It’s entirely possible that I can convert the previously-installed docker registry and Gitea to use one as well.
My Gitea instance isn’t using TLS, so I’m going to replace the LoadBalancer with an Ingress, which will allow TLS termination.
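As a sketch, the replacement Ingress might look like the following; the host, secret, and backend service name/port are assumptions:

```sh
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea
  namespace: gitea
spec:
  tls:
    - hosts:
        - git.k3s.differentpla.net
      secretName: gitea-tls
  rules:
    - host: git.k3s.differentpla.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea-http
                port:
                  number: 3000
EOF
```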
We’re using ArgoCD at work; time to play with it.
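The standard installation, per the ArgoCD getting-started docs:

```sh
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```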
Because I’m running my k3s cluster on Raspberry Pi 4 nodes, and they’re ARM-64 (aarch64), I keep running into problems where applications are compiled for x86_64 (amd64) and don’t run.
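A quick way to check whether an image publishes an arm64 variant before deploying it (the image name is a placeholder):

```sh
# Look for "arm64" among the platforms in the image's manifest list.
docker manifest inspect some/image:tag | grep architecture
```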
I’ve got Gitea installed on my cluster, but it’s currently accessed via HTTP (i.e. no TLS; it’s not secure).
There’s a security fix that needs to be applied; there’s an arm64 release candidate. Time to upgrade ArgoCD.
Up to this point, I’ve been creating and installing certificates manually. Let’s see if cert-manager will make that easier.
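The installation itself is straightforward with Helm; this is a sketch, per the cert-manager docs:

```sh
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```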
I’d like to run Livebook on my cluster. Here’s how I went about doing that.
I’ve got an extra instance of CoreDNS running in my cluster, serving *.k3s.differentpla.net, with LoadBalancer and Ingress names registered in it, and it’s working fine for queries to the cluster. It’s not working fine for queries inside the cluster. What’s up with that?
While debugging pod DNS problems, I discovered that CoreDNS allows customization by importing extra zone files from a config map. I’m going to use that to forward queries for k3s.differentpla.net to my custom CoreDNS instance.
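k3s’s bundled CoreDNS imports extra server blocks from a coredns-custom ConfigMap in kube-system. Roughly (the upstream IP address is an assumption):

```sh
# Forward queries for k3s.differentpla.net to the custom CoreDNS instance.
kubectl -n kube-system create configmap coredns-custom \
  --from-literal=k3s.server='k3s.differentpla.net:53 {
    forward . 192.168.28.181
}'
```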
Because I like experimenting with Kubernetes from Elixir Livebook, I made the service account into a cluster admin.
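Which is a one-liner, albeit a big hammer; the service account name and namespace here are assumptions:

```sh
# WARNING: cluster-admin grants full control of the cluster.
kubectl create clusterrolebinding livebook-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:livebook
```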
This afternoon, I fired up my k3s cluster for the first time in a while. When I ran apt update, I got an error message about a missing Release file.
As I add more things to my k3s cluster, I find myself wishing that I had a handy index of their home pages. For example, I’ve got ArgoCD and Gitea installed. I probably want to expose the Longhorn console, and the Kubernetes console. I think Traefik has a console, too. I’ll also be adding Grafana at some point soon.
ArgoCD provides a web interface and a command line interface. Let’s install the CLI.
Using ArgoCD CLI:
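A sketch of getting it installed and logged in on arm64; the server name is an assumption:

```sh
# Download the arm64 CLI binary from the GitHub releases page.
curl -sSL -o argocd \
  https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-arm64
chmod +x argocd
sudo mv argocd /usr/local/bin/
# Log in and list applications.
argocd login argocd.k3s.differentpla.net
argocd app list
```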
The documentation for VictoriaMetrics is a bit of a mess, so here’s what worked for me.
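For a single-node setup, roughly this; treat it as a sketch rather than what the post actually ran:

```sh
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update
helm install vmsingle vm/victoria-metrics-single
```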
I’ve got an Electric Imp Environment Tail in my office. It monitors the temperature, humidity and pressure. Currently, to display a graph, it’s using flot.js and some shonky Javascript that I wrote. It remembers samples from the last 48 hours.