K3s: You must be logged in to the server (Unauthorized)
This afternoon, I couldn’t run `kubectl get namespaces` against my K3s cluster. Instead, I got an Unauthorized error.
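One plausible first check (an assumption on my part, not necessarily the cause here): K3s client certificates expire after a year, so it’s worth inspecting the expiry of the client certificate embedded in the kubeconfig. A sketch, assuming the default K3s paths:

```sh
# Check when the client cert in the default K3s kubeconfig expires
# (path assumes a default K3s install)
sudo grep client-certificate-data /etc/rancher/k3s/k3s.yaml \
  | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate
```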
Given the problems I had when I last upgraded everything on my K3s cluster, I’m going to put a runbook together for doing it “properly”.
It turns out that Traefik has a dashboard. Here’s how to access it via `kubectl port-forward`.
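A sketch of the port-forward, assuming the Traefik v2 that K3s bundles (deployed in `kube-system`, with the dashboard enabled on its port-9000 entrypoint):

```sh
kubectl -n kube-system port-forward deployment/traefik 9000:9000
# then browse to http://localhost:9000/dashboard/ (the trailing slash matters)
```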
I want visitors to http://home.k3s.differentpla.net to be redirected to https://home.k3s.differentpla.net. Here’s how to set that up on K3s, using Traefik middlewares.
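A minimal sketch of the middleware, assuming the Traefik 2.x CRDs that ship with K3s (newer Traefik versions use the `traefik.io/v1alpha1` API group instead):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-https
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true
```

Attach it to an Ingress with the annotation `traefik.ingress.kubernetes.io/router.middlewares: default-redirect-https@kubernetescrd` (the middleware name is prefixed with its namespace).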
On December 14th, 2022 at approximately 10:15, Kubernetes wiped the persistent volumes of a number of applications in the K3s cluster in my homelab. This is the incident review.
I’ve got an Electric Imp Environment Tail in my office. It monitors the temperature, humidity and pressure. Currently, to display a graph, it’s using flot.js and some shonky JavaScript that I wrote. It remembers samples from the last 48 hours.
The documentation for VictoriaMetrics is a bit of a mess, so here’s what worked for me.
Using the ArgoCD CLI:
ArgoCD provides a web interface and a command line interface. Let’s install the CLI.
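For example (release URL per the ArgoCD docs; pick the binary matching your architecture — `arm64` shown here, since my nodes are Raspberry Pis):

```sh
# Download the latest argocd CLI release (Linux arm64 as an example)
curl -sSL -o argocd \
  https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-arm64
chmod +x argocd
sudo mv argocd /usr/local/bin/
```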
As I add more things to my k3s cluster, I find myself wishing that I had a handy index of their home pages. For example, I’ve got ArgoCD and Gitea installed. I probably want to expose the Longhorn console, and the Kubernetes console. I think Traefik has a console, too. I’ll also be adding Grafana at some point soon.
This afternoon, I fired up my k3s cluster for the first time in a while. When I ran `apt update`, I got an error message about a missing Release file.
While debugging pod DNS problems, I discovered that CoreDNS allows customization by importing extra zone files from a config map. I’m going to use that to forward queries for k3s.differentpla.net to my custom CoreDNS instance.
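A sketch of that config map, assuming a K3s version whose bundled CoreDNS imports `/etc/coredns/custom/*.server` (the forwarded address is a placeholder for wherever the custom CoreDNS is listening):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  k3s.differentpla.net.server: |
    k3s.differentpla.net {
      # placeholder: the LoadBalancer IP of the custom CoreDNS instance
      forward . 192.168.28.181
    }
```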
I’ve got an extra instance of CoreDNS running in my cluster, serving `*.k3s.differentpla.net`, with LoadBalancer and Ingress names registered in it, and it’s working fine for queries to the cluster. It’s not working fine for queries inside the cluster. What’s up with that?
I’d like to run Livebook on my cluster. Here’s how I went about doing that.
Because I’m running my k3s cluster on Raspberry Pi 4 nodes, and they’re ARM-64 (aarch64), I keep running into problems where applications are compiled for x86_64 (amd64) and don’t run.
I’m in the middle of installing ArgoCD (blog post will appear later). Rather than use up another LoadBalancer IP address for it (and mess around with TLS), let’s talk about using an Ingress. It’s entirely possible that I can convert the previously-installed docker registry and Gitea to use one as well.
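A minimal Ingress might look like the following sketch (the hostname is hypothetical, and this glosses over the fact that `argocd-server` speaks TLS itself, which typically needs extra configuration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: traefik
  rules:
    - host: argocd.k3s.differentpla.net   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
```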
If you’re using a NodePort service, and it has multiple replicas, how does it know which replica to use?
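Short answer: the Service doesn’t pick a replica itself; kube-proxy on each node forwards the node port to one of the Service’s ready endpoints. A minimal sketch (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-server            # hypothetical
spec:
  type: NodePort
  selector:
    app: my-server           # selects the pods of every replica
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080        # kube-proxy forwards this port to any ready pod
```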
Wherein I finally bring together all we’ve learned so far and stand this thing up properly.
Installation with Helm, per https://longhorn.io/docs/1.2.3/deploy/install/install-with-helm/.
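The short version, per those docs:

```sh
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace
```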
Having reinstalled all of my nodes with Ubuntu, I need to go back and install k3s. Joy.
Persistent Volume Claims are namespace-scoped. Persistent Volumes are not:
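You can confirm this from the API itself:

```sh
# PersistentVolumes are cluster-scoped...
kubectl api-resources --namespaced=false | grep persistentvolumes
# ...while PersistentVolumeClaims are namespace-scoped
kubectl api-resources --namespaced=true | grep persistentvolumeclaims
```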
Pushing a simple node.js-based image to my private docker registry failed.
Until just now, I didn’t get `NodePort` services.
In the previous post, we succeeded in giving our docker registry some persistent storage. However, it used (the default) dynamic provisioning, which means we don’t have as much control over where the storage is provisioned.
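For instance, a statically-provisioned volume pinned to a path we choose might look like this sketch (name, size, and path are all hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv             # hypothetical
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual      # opt out of dynamic provisioning
  hostPath:
    path: /mnt/registry-data    # hypothetical path on the node
```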
We need to give our Docker registry some persistent storage. Currently, if we restart it, it loses its stored data.
Let’s see if we can push an image to our new Docker Registry.
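Something like this, with a hypothetical registry hostname:

```sh
# Tag the image with the registry's hostname, then push
docker tag my-image:latest registry.k3s.differentpla.net/my-image:latest
docker push registry.k3s.differentpla.net/my-image:latest
```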
The default option for persistent volumes on k3s is `local-path`, which provisions (on-demand) the storage on the node’s local disk. This has the unfortunate side-effect that the container is now tied to that particular node.
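A minimal PVC using that default class (name and size hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc            # hypothetical
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path  # the k3s default provisioner
  resources:
    requests:
      storage: 1Gi
```

Once bound, the data lives on whichever node the pod was first scheduled to — which is exactly the pinning problem described above.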
This morning, I grabbed my Raspberry Pi cluster out of the box and fired it up again.
About 2 years ago, I spent some time messing around with k3s on a cluster made from 5x Raspberry Pi 2 Model B nodes.
Note: This is basically the same as the node.js server.
Now that I’ve successfully run nginx on my cluster, I’m going to do the same with a simple node.js server.
Start nginx, with 3 replicas:
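One way to do that (assuming kubectl 1.19 or later, which added the `--replicas` flag):

```sh
kubectl create deployment nginx --image=nginx --replicas=3
```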
It’s at this point that I diverge from Scott’s blog post; he installs full-fat Kubernetes. I’m going to use k3s.
I downloaded Raspbian Buster Lite from the official site and wrote it to the SD cards (using `dd`).
I had a bunch of hardware lying around from an earlier abandoned side project: