Upgrading k3s
Control Plane
This is a non-HA cluster (I’ve only got one server node), and kubectl drain rpi401 (the server) doesn’t want to evict pods with local storage. So I guess I just apply the upgrade and hope that it recovers after a restart:
sudo apt update
sudo apt upgrade
curl -sfL https://get.k3s.io | sh - # but see below, re: Klipper
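Once it’s back up, a quick sanity check doesn’t hurt, something like:
sudo systemctl status k3s    # the server runs as the 'k3s' systemd unit
kubectl get nodes            # rpi401 should be Ready and reporting the new version
kubectl get pods -A          # the pods with local storage should come back by themselves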
Disabling Klipper
A few hours later, I realised that I forgot to disable Klipper. I’ve not gone back and tried again, but based on the documentation (see below for links), I’m going to assume that the following is the correct incantation for the installation script:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable servicelb" sh -
- Disabling Klipper
- Rancher Docs: Networking: Disabling the Service LB
- Rancher Docs: Installation Options: INSTALL_K3S_EXEC
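I haven’t re-run the install yet, but when I do, the quick way to tell whether Klipper is actually gone should be to look for its svclb-* pods in kube-system (it creates a DaemonSet per LoadBalancer service):
kubectl get daemonset,pod -n kube-system | grep svclb   # no output means the service LB is disabled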
The basic plan is to drain each node in turn, apply the upgrade and then uncordon the node. I’m upgrading from v1.21.7+k3s1 to v1.22.5+k3s1.
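kubectl get nodes shows the kubelet version for each node, so it’s easy to keep track of which ones are still pending:
kubectl get nodes   # the VERSION column shows which nodes are still on v1.21.7+k3s1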
Worker Nodes
On the master:
kubectl drain rpi405 \
--ignore-daemonsets \
--pod-selector='app!=csi-attacher,app!=csi-provisioner' # because of Longhorn
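If you’re curious what’s still running on the node after the drain (the DaemonSet pods, plus the Longhorn pods excluded by the selector), a field selector on spec.nodeName shows it:
kubectl get pods -A -o wide --field-selector spec.nodeName=rpi405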
On the worker (rpi405, in this example):
sudo apt update
sudo apt upgrade
curl -sfL https://get.k3s.io | K3S_URL=https://rpi401:6443 K3S_TOKEN=K... sh -
The above command line was in bash history, so I didn’t need to dig out the token. It’s in /var/lib/rancher/k3s/server/node-token on the server node, if you need it.
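If you do need to dig it out:
sudo cat /var/lib/rancher/k3s/server/node-token   # needs root; this is the K3S_TOKEN value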
On the master:
kubectl uncordon rpi405
Repeat for each worker.
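After each uncordon, a quick check from the master that the node is back and taking pods again:
kubectl get nodes                                                   # rpi405 should be Ready, no longer SchedulingDisabled
kubectl get pods -A -o wide --field-selector spec.nodeName=rpi405   # pods should be landing on it again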
error: cannot delete Pods
I had a spare, temporary busybox pod kicking around (from kubectl run):
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/busybox
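The error is just kubectl being cautious: nothing will recreate that pod if it’s evicted, so the drain won’t delete it without --force. You can confirm that nothing manages it by checking for owner references (empty output means it’s a bare pod):
kubectl get pod busybox -o jsonpath='{.metadata.ownerReferences}'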
Confirm that it’s unused, then just delete it:
kubectl delete pod busybox