Installing Gitea on k3s
I want to play with GitOps on my k3s cluster (specifically ArgoCD). To do that, I’m going to need a local git server. I decided to use Gitea.
Add the helm repo
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
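To confirm that the chart is now visible (and see which versions are available):

helm search repo gitea-charts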
Create a namespace
kubectl create namespace gitea
Install postgres
I’ll run a separate instance of postgres specifically for Gitea. I’ll run it as a StatefulSet.
statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea-postgres
  namespace: gitea
spec:
  selector:
    matchLabels:
      app: gitea-postgres
  serviceName: gitea-postgres
  template:
    metadata:
      labels:
        app: gitea-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: gitea-postgres-secret
                  key: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
            - name: gitea-initdb
              mountPath: /docker-entrypoint-initdb.d/
      volumes:
        - name: gitea-initdb
          configMap:
            name: gitea-initdb
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        storageClassName: longhorn
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
Things of note:
- Use the official postgres image, which has arm64 support, rather than the Bitnami one, which doesn’t.
- The password for the postgres user is specified by the POSTGRES_PASSWORD environment variable, which comes from a secret (see below).
- The data volume uses a PersistentVolumeClaim backed by Longhorn, so that it doesn’t get deleted when the pod is restarted.
- The actual data storage location (specified by the PGDATA environment variable) must be a subdirectory of the volume; otherwise PostgreSQL complains.
- We’ll create the database by using initialization scripts in a ConfigMap. More on this below.
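The StatefulSet isn’t applied until later (all four manifests get applied together, below), but once it is, the volumeClaimTemplate shows up as a PVC named pgdata-gitea-postgres-0. A quick way to confirm that Longhorn provisioned it:

kubectl --namespace gitea get pvc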
secret.yaml
I generated a random password with env LC_CTYPE=C tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 10 ; echo. It’s stored in the secret base64-encoded; echo -n 'OW1C6o3sW7' | base64 works for that.
apiVersion: v1
kind: Secret
metadata:
  namespace: gitea
  name: gitea-postgres-secret
type: Opaque
data:
  # OW1C6o3sW7
  password: T1cxQzZvM3NXNw==
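If you’d rather not do the base64 step by hand, kubectl can generate an equivalent manifest for you (just a convenience; the YAML above is what I actually used):

kubectl create secret generic gitea-postgres-secret \
    --namespace gitea \
    --from-literal=password='OW1C6o3sW7' \
    --dry-run=client -o yaml > secret.yaml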
configmap.yaml
If you’re using an external database, the Gitea database migration scripts don’t create the database itself, so you need to do that yourself.
The official PostgreSQL container supports initialization scripts.
The Kubernetes documentation gives an example of using a ConfigMap for configuring MySQL, so I modelled the following on that:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitea-initdb
  namespace: gitea
  labels:
    app: gitea-postgres
data:
  init-gitea.sh: |
    echo "Creating 'gitea' database..."
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOF
    CREATE ROLE gitea WITH LOGIN PASSWORD 'gitea';
    CREATE DATABASE gitea WITH OWNER gitea TEMPLATE template0 ENCODING UTF8 LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF8';
    \connect gitea;
    CREATE SCHEMA gitea;
    GRANT ALL ON SCHEMA gitea TO gitea;
    ALTER USER gitea SET search_path=gitea;
    EOF
    echo "Created database."
Notes:
- You could probably do this with one or more .sql scripts instead; they’re also supported as initialization scripts by the postgres docker container.
- The Gitea documentation for creating the database doesn’t mention the CREATE SCHEMA, but if you follow the external database example in the Gitea Helm chart documentation, it assumes you’ve done that.
- In the above, the former assumes the database is named giteadb, the latter assumes gitea. Pick one; it doesn’t matter.
- The password for the gitea user could probably be passed in as an environment variable from a secret. I didn’t bother. If an attacker has got this far into my network, my throw-away Gitea instance is the least of my worries.
- Make sure you get the syntax correct, otherwise you’ll run into this problem.
service.yaml
The postgres instance needs to be exposed to the Gitea instance; use a ClusterIP service for that:
apiVersion: v1
kind: Service
metadata:
  name: gitea-postgres
  namespace: gitea
  labels:
    app: gitea-postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    app: gitea-postgres
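Assuming you saved the manifests under the file names used as headings above, apply them all, wait for postgres to come up, and then check that the init script created the gitea database:

kubectl apply -f secret.yaml -f configmap.yaml -f statefulset.yaml -f service.yaml
kubectl --namespace gitea rollout status statefulset gitea-postgres
kubectl --namespace gitea exec gitea-postgres-0 -- psql -U postgres -c '\l'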
Install Gitea
Create values.yaml
# Disable memcached; Gitea will use an internal 'memory' cache.
memcached:
  enabled: false

# Disable postgresql; we've already created our own.
postgresql:
  enabled: false

# Tell MetalLB that sharing the IP (for HTTP and SSH) is fine.
# I'm not convinced this is necessary -- it seemed to work without it -- but it's in the Gitea docs, so...
service:
  ssh:
    annotations:
      metallb.universe.tf/allow-shared-ip: gitea

# The gitea.config section maps to the app.ini file.
gitea:
  config:
    server:
      DOMAIN: git.k3s.differentpla.net
    database:
      DB_TYPE: postgres
      HOST: gitea-postgres.gitea.svc.cluster.local:5432
      USER: gitea
      PASSWD: gitea
      NAME: gitea
      SCHEMA: gitea
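If you want to see the full set of values you can override, or to preview what the chart renders with these overrides, helm can do both without touching the cluster:

helm show values gitea-charts/gitea > default-values.yaml
helm template gitea gitea-charts/gitea --namespace gitea --values values.yaml | less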
Install using the Helm chart
helm install gitea gitea-charts/gitea --namespace gitea --create-namespace --values values.yaml
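The first install takes a little while, because the init containers have work to do; watching the pods is the easiest way to follow progress:

kubectl --namespace gitea get pods --watch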
loadbalancer.yaml
Using MetalLB:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: gitea
  name: gitea
  namespace: gitea
spec:
  type: LoadBalancer
  ports:
    - name: gitea-http
      port: 80
      protocol: TCP
      targetPort: 3000
    - name: gitea-ssh
      port: 22
      protocol: TCP
      targetPort: 22
  selector:
    app: gitea
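Apply it, then check that MetalLB has assigned an external IP:

kubectl apply -f loadbalancer.yaml
kubectl --namespace gitea get service gitea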
Troubleshooting
Init:CrashLoopBackOff
The gitea-0 pod has multiple init containers. If one of those fails, you’ll see something like the following:
$ kubectl --namespace gitea get all
NAME          READY   STATUS                  RESTARTS      AGE
pod/gitea-0   0/1     Init:CrashLoopBackOff   5 (50s ago)   5m19s
But if you try to get the logs the normal way, it’ll fail, because the pod is not initialized yet:
$ kubectl --namespace gitea logs gitea-0
Error from server (BadRequest): container "gitea" in pod "gitea-0" is waiting to start: PodInitializing
To get a list of containers in the pod, and their status, you can use kubectl --namespace gitea describe pod gitea-0. For more concise output, use jq, like this:
kubectl --namespace gitea get pod gitea-0 -o json | jq '.status.initContainerStatuses[] | {name, state}'
Once you’ve discovered that it’s the configure-gitea container that’s failing:
kubectl --namespace gitea logs gitea-0 -c configure-gitea
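The pod’s events can also point at what’s wrong; for example:

kubectl --namespace gitea get events \
    --field-selector involvedObject.name=gitea-0 --sort-by=.lastTimestamp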
Can’t connect to HTTP
I initially copy-pasted the LoadBalancer manifest from somewhere else and forgot to change the selector properly. This shows up as “no route to host” errors, which (in this case) means that there’s no pod backing the service.
To figure that out, use kubectl describe service my-service to list endpoints. If there are none, then the service isn’t backed by an app, which means that either there are no working pods or they don’t match the selector.
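For example, for the gitea service above:

kubectl --namespace gitea describe service gitea
kubectl --namespace gitea get endpoints gitea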
Can’t connect to SSH
Also in the LoadBalancer manifest above, I copy-pasted the gitea-http entry to create the gitea-ssh entry, but I forgot to change targetPort. This resulted in Connection closed by remote host errors when attempting to use ssh to access Gitea.
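A quick sanity check for this, pulling just the SSH targetPort out of the service with jsonpath:

kubectl --namespace gitea get service gitea \
    -o jsonpath='{.spec.ports[?(@.name=="gitea-ssh")].targetPort}'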