Using CoreDNS for LoadBalancer addresses
I’d like to be able to access my load-balanced services by name (docker.k3s.differentpla.net, for example) from outside my k3s cluster. At the moment, I’m using the --addn-hosts option to dnsmasq on my router.
This is fragile: every time I want to add a load-balanced service, I need to edit the additional hosts file on my router and restart dnsmasq.
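For illustration, the current setup looks something like this (the file paths here are hypothetical; check your router’s dnsmasq configuration for the real ones):
# /etc/dnsmasq.conf
addn-hosts=/etc/dnsmasq/k3s-hosts

# /etc/dnsmasq/k3s-hosts -- every new service means editing this file
# and restarting dnsmasq.
192.168.28.11 nginx.k3s.differentpla.net
192.168.28.12 docker.k3s.differentpla.net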
I’d prefer to forward the .k3s.differentpla.net subdomain to another DNS server, using the --server option to dnsmasq. That way, once the forwarding rule is configured, I never need to touch my router again; we’ll see the exact rule in the Router Configuration section below.
Kubernetes already provides CoreDNS for service discovery, so I’m going to use another instance of that.
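Everything that follows lives in a dedicated k3s-dns namespace, which needs to exist before the manifests are applied:
$ kubectl create namespace k3s-dns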
Deployment
deployment.yaml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: k3s-dns
name: k3s-dns
namespace: k3s-dns
spec:
progressDeadlineSeconds: 600
replicas: 1
selector:
matchLabels:
app: k3s-dns
template:
metadata:
labels:
app: k3s-dns
spec:
containers:
- args:
- -conf
- /etc/coredns/Corefile
image: rancher/coredns-coredns:1.8.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 8181
scheme: HTTP
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
volumeMounts:
- mountPath: /etc/coredns
name: config-volume
readOnly: true
dnsPolicy: Default
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: config-volume
configMap:
name: k3s-dns
items:
- key: Corefile
path: Corefile
- key: NodeHosts
path: NodeHosts
defaultMode: 420
It’s a slightly trimmed copy of the original CoreDNS deployment, which I got with the following command:
$ kubectl --namespace kube-system get deployment coredns -o yaml > kube-system-coredns.yaml
The following things are of interest:
- I changed a bunch of the names.
- It’s got livenessProbe and readinessProbe sections that we’ve not used before. These use the CoreDNS health and ready plugins, which we’ll see in the ConfigMap later; you can also poke the probe endpoints directly, as shown after this list.
- It exposes port 53 (standard DNS) over both UDP and TCP, and also exposes Prometheus metrics on port 9153, using the CoreDNS prometheus plugin.
- It sets resource requests and limits. Limits constrain runaway containers; requests are used by the scheduler to decide whether the pod will fit on a particular node.
- The original deployment had a service account, which is used by the kubernetes plugin to enumerate service endpoints. We’re not using that plugin, so I removed all mention of the service account.
- It uses a ConfigMap to provide two different configuration files.
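Here’s a minimal sketch of checking those probe endpoints by hand, assuming the pod is already running and that your kubectl supports the deploy/ shorthand:
$ kubectl --namespace k3s-dns port-forward deploy/k3s-dns 8080:8080 8181:8181 &
$ curl -s http://localhost:8080/health    # health plugin; reports OK when healthy
$ curl -s http://localhost:8181/ready     # ready plugin; returns 200 once all plugins are ready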
ConfigMap
The deployment mounts a ConfigMap as a volume:
...
volumeMounts:
- mountPath: /etc/coredns
name: config-volume
readOnly: true
volumes:
- name: config-volume
configMap:
name: k3s-dns
items:
- key: Corefile
path: Corefile
- key: NodeHosts
path: NodeHosts
...
Specifically, it mounts config-volume under /etc/coredns. The config-volume volume is backed by a ConfigMap named k3s-dns.
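Once the pod is running, it’s easy to confirm that both files were projected into the container (a quick sanity check; the deploy/ shorthand picks one of the deployment’s pods):
$ kubectl --namespace k3s-dns exec deploy/k3s-dns -- ls /etc/coredns
Corefile
NodeHosts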
That ConfigMap is defined in configmap.yaml, which looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
name: k3s-dns
namespace: k3s-dns
data:
Corefile: |
k3s.differentpla.net:53 {
errors
health
ready
hosts /etc/coredns/NodeHosts {
ttl 60
reload 15s
fallthrough
}
cache 30
loop
reload
loadbalance
}
NodeHosts: |
192.168.28.11 nginx.k3s.differentpla.net
192.168.28.12 docker.k3s.differentpla.net
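Because the hosts plugin specifies reload 15s, adding a new name is just an edit to the ConfigMap; there’s no need to restart the pod (although the kubelet can take up to a minute or so to sync the mounted file). For example (the registry entry here is hypothetical):
$ kubectl --namespace k3s-dns edit configmap k3s-dns
# ...then add a line to NodeHosts, e.g.:
#   192.168.28.13 registry.k3s.differentpla.net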
Service
svc.yaml looks like this:
apiVersion: v1
kind: Service
metadata:
labels:
app: k3s-dns
name: k3s-dns
namespace: k3s-dns
spec:
type: NodePort
ports:
- port: 53
name: dns-tcp
protocol: TCP
targetPort: 53
nodePort: 32053
- port: 53
name: dns
protocol: UDP
targetPort: 53
nodePort: 32053
selector:
app: k3s-dns
It’s a NodePort service, because it needs to be accessible from my router (i.e. from outside the cluster). It can’t be a LoadBalancer service, because Kubernetes doesn’t (at the time of writing) support multiple protocols on a single load balancer, and DNS needs port 53 on both UDP and TCP. Note that the nodePort value must fall within the cluster’s NodePort range, which defaults to 30000-32767.
I’m going to have my router forward queries to rpi401, the control-plane node, on the grounds that if that node is down, everything’s down anyway.
Does it work?
$ kubectl apply -f deployment.yaml -f configmap.yaml -f svc.yaml
$ kubectl --namespace k3s-dns get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/k3s-dns-d6769ccc5-sj5gr   1/1     Running   0          6m9s

NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                     AGE
service/k3s-dns   NodePort   10.43.180.89   <none>        53:32053/TCP,53:32053/UDP   33m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/k3s-dns   1/1     1            1           6m9s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/k3s-dns-d6769ccc5   1         1         1       6m9s
$ dig +short -p 32053 @rpi401 nginx.k3s.differentpla.net
192.168.28.11
Seems to work, yes. Now to configure the router.
Router Configuration
SynologyRouter:/etc/dhcpd # cat dhcpd-k3s-dns.conf
server=/k3s.differentpla.net/192.168.28.181#32053
SynologyRouter:/etc/dhcpd # cat dhcpd-k3s-dns.info
enable="yes"
SynologyRouter:/etc/dhcpd # /etc/rc.network nat-restart-dhcp
Note: you MUST have two hyphens in the file name (as in dhcpd-k3s-dns.conf); otherwise it won’t be picked up.
SynologyRouter:/etc/dhcpd # nslookup nginx.k3s.differentpla.net localhost
Server: 127.0.0.1
Address 1: 127.0.0.1 localhost
Name: nginx.k3s.differentpla.net
Address 1: 192.168.28.11
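And, as a final check from a LAN client that uses the router as its DNS resolver:
$ dig +short docker.k3s.differentpla.net
192.168.28.12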