Using docker with macvlan on Synology NAS
If you use the `bridge` or `host` network drivers in Docker, containers must use different port numbers to be accessible on the host network. Here’s how to use the `macvlan` driver to assign a unique IP address to each container, allowing containers (and the host) to use the same port numbers.
Background
By default, when you start a docker container, it uses the `bridge` network driver, which means that only published ports are accessible, and they’re mapped from the host to the container. If instead you use the `host` network driver, container ports are host ports.

Either way, the container and the host – or two containers – can’t use the same port number. You can’t have two webservers both running on port 80 (or port 443), and if the host already runs an SSH daemon on port 22, you can’t also access a container’s SSH daemon on port 22.
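To make that concrete, here’s an illustrative sketch of the conflict (the container names and exact error text are approximate, not from the post):

```sh
# With the bridge driver, each published port claims a host port,
# so two containers can't both publish port 80:
sudo docker run --detach --name web-1 -p 80:80 nginx:alpine
sudo docker run --detach --name web-2 -p 80:80 nginx:alpine
# the second fails with "... Bind for 0.0.0.0:80 failed: port is already allocated"
```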
Motivation
I want to self-host Forgejo on my home network. For various reasons, I don’t want to run it in my K3s cluster. I’ve got a Synology DS923+, which can run Docker containers, so that seems like a good option. If I use Container Manager, I can set up the Postgres container and the Forgejo container fairly easily. However, because the NAS is already listening on port 22, I would have to expose Forgejo’s SSH daemon on a different port (e.g., 2222).
That seems untidy to me, so I started looking at other options.
Investigation
It’s been a while since I had to do any of this – Kubernetes just deals with it – but it turns out that the relevant magic is the `macvlan` network driver. Armed with a relevant search term, I found the following:
- SynoForum: How to attach container to dedicated interface?
- Creating macvlan in Synology NAS
- Configuring Traefik on Synology DSM7 using docker macvlans (linked from the previous page)
- I didn’t really make use of this one, but I’m linking to it here, because I suspect I’ll want to run Traefik or similar as an ingress/reverse proxy at some point.
Experimenting (not-Synology)
Messing around with network configurations is kinda dangerous if the only way to access the box is over the network, so I did some experimenting on a Linux box I had sitting on my “project” desk. If I screwed it up, it’s got a monitor and keyboard attached, so fixing it would be easier than resetting the NAS.
First, I installed Docker Engine (I’m using Ubuntu) by following the instructions.
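For reference, the quick path is Docker’s convenience script; the apt-repository route in the official docs is the more careful option:

```sh
# Docker's convenience install script (inspect it before piping to sh, if you prefer):
curl -fsSL https://get.docker.com | sh
```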
Docker’s macvlan driver wants an IP address range expressed in CIDR format. When I configured my network, I didn’t think of that, so it’s not neatly divided into CIDR-sized chunks. For example, my DHCP server hands out 192.168.28.100 - 192.168.28.254, and my K3s cluster is using 192.168.28.60 - 192.168.28.90. Neither of these lines up with CIDR subnets, so I had to revisit that.
It should be noted that my network isn’t actually divided into CIDR-sized subnets – it’s all a single /24 network. But if docker wants a network specified as a /28 or a /29, it’s just easier if I line everything up like that.
I found a handy Visual Subnet Calculator that allowed me to divide my /24 into CIDR-sized chunks. The results of that are here.
I moved the DHCP range to 192.168.28.96 - 192.168.28.254 (which spans a /27 and a /25), and I moved the K3s cluster to 192.168.28.48 - 192.168.28.63 (a /28). Both of these ranges encompass their corresponding old range, so nothing should break.
In the end, for macvlan, I opted for a /28, giving me 14 IP addresses to play with. That should be plenty.
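As a quick sanity check on the arithmetic (a throwaway snippet, not from the post), Python’s `ipaddress` module will tell you exactly what a `/28` covers:

```sh
python3 -c '
import ipaddress
net = ipaddress.ip_network("192.168.28.64/28")
print(net.network_address, "-", net.broadcast_address, f"({net.num_addresses} addresses)")
'
# 192.168.28.64 - 192.168.28.79 (16 addresses)
```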
Parent network interface
The `macvlan` driver requires a “parent” network interface. You can find this by running `sudo ip link show`. On my Ubuntu PC, this is `eno1`.
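The output looks something like this (abridged; the device list and MAC address here are illustrative):

```sh
sudo ip link show
# 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN ...
# 2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP ...
#     link/ether aa:bb:cc:dd:ee:ff brd ff:ff:ff:ff:ff:ff
```

You want the physical interface that’s connected to your LAN (state `UP`, with a `link/ether` line).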
Create macvlan docker network
```sh
sudo docker network create \
    --driver macvlan \
    --subnet 192.168.28.0/24 \
    --gateway 192.168.28.1 \
    --ip-range 192.168.28.64/28 \
    --aux-address 'host=192.168.28.78' \
    --opt parent=eno1 \
    macvlan0
```
This tells docker to create a network, using the `macvlan` driver (`--driver macvlan`).

- The `--subnet` and `--gateway` arguments are those of my home network.
- The `--ip-range` option is the range of addresses reserved for the docker network. In this case it’s `192.168.28.64/28`. Using a `/28` gives me 16 addresses (14 hosts).
- The `--aux-address` option excludes an IP address from being used in the docker network. The address specified here will be used for the host; see below.
- The `--opt parent=eno1` option attaches this network to the `eno1` interface connected to the local network, identified earlier.
- The network is named `macvlan0`. We’ll use this name later, when starting containers. The name doesn’t particularly matter, but calling it `macvlan` and adding a number seems like a good choice. I can’t see needing more than one or two networks.
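You can check that the network came out as intended (assuming `jq` is installed; it’s used again below):

```sh
sudo docker network inspect macvlan0 | jq '.[0].IPAM.Config'
# should echo back the subnet, gateway, ip-range and aux-address from above
```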
Configuring the network
To configure the network interface device, you also need to run the following commands:
```sh
# create the macvlan device; note the 'macvlan0' and 'eno1' from earlier.
sudo ip link add macvlan0 link eno1 type macvlan mode bridge

# give the host an IP address; it's the highest available in the range.
sudo ip addr add 192.168.28.78/32 dev macvlan0

# bring the interface up
sudo ip link set macvlan0 up

# add a route
sudo ip route add 192.168.28.64/28 dev macvlan0
```
These add and configure the `macvlan0` network device. In particular:

- The device is called `macvlan0`, the same as the docker network.
- It’s connected to the `eno1` device.
- We give the host the `192.168.28.78` address specified earlier. This is the highest available address in the `192.168.28.64/28` CIDR. We could have used the lowest; it doesn’t matter which, as long as it matches the `--aux-address` given to Docker.
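A quick way to confirm the host side took effect:

```sh
ip addr show macvlan0       # should show 192.168.28.78/32
ip route show dev macvlan0  # should show 192.168.28.64/28
```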
Testing it
To test it, I’m going to start two nginx containers. To tell them apart, I’m going to steal something from an earlier blog post and use a volume mount to replace the default index page.
```sh
mkdir -p tmp/nginx-1 tmp/nginx-2
echo 'One' > tmp/nginx-1/index.html
echo 'Two' > tmp/nginx-2/index.html

sudo docker run --net=macvlan0 --ip=192.168.28.65 --detach --name nginx-1 -v "$(pwd)/tmp/nginx-1:/usr/share/nginx/html" nginx:alpine
sudo docker run --net=macvlan0 --ip=192.168.28.66 --detach --name nginx-2 -v "$(pwd)/tmp/nginx-2:/usr/share/nginx/html" nginx:alpine
```
If I browse to `http://192.168.28.65` or `http://192.168.28.66` from my Windows laptop, I get the expected `One` or `Two` page. Success.
After publishing this, my friend Mike asked me on Mastodon whether I could access the host from the container, so I checked:
- curl from another host to either container works.
- curl from the host to either container works.
- with nginx running on the host and an Ubuntu container attached to the macvlan network: curl from that container to either nginx container works, and curl from that container to the host works (sketched below).
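If you want to reproduce those checks, something like this works (the `curlimages/curl` image is my convenience choice here; the original test used an Ubuntu container):

```sh
# From another machine on the LAN, or from the Docker host itself:
curl http://192.168.28.65
curl http://192.168.28.66

# From a container attached to the same macvlan network:
sudo docker run --rm --net=macvlan0 curlimages/curl -s http://192.168.28.65
sudo docker run --rm --net=macvlan0 curlimages/curl -s http://192.168.28.78   # the host
```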
Implementing (Synology)
The steps are basically the same as above, but the numbers are different (the Synology NAS is going to get `192.168.28.32/28`; also the parent interface is `eth0`, rather than `eno1`):
```sh
sudo docker network create \
    --driver macvlan \
    --subnet 192.168.28.0/24 \
    --gateway 192.168.28.1 \
    --ip-range 192.168.28.32/28 \
    --aux-address 'host=192.168.28.46' \
    --opt parent=eth0 macvlan0

sudo ip link add macvlan0 link eth0 type macvlan mode bridge
sudo ip addr add 192.168.28.46/32 dev macvlan0
sudo ip link set macvlan0 up
sudo ip route add 192.168.28.32/28 dev macvlan0
```
Note that the `ip` commands are not persistent; they’ll need re-running after a reboot.
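Until that’s fixed properly, a minimal sketch of what a boot-time script would contain – my assumption is that you’d re-run it via a Synology Task Scheduler “triggered task”; the links above cover this in detail. It’s just the `ip` commands from above:

```sh
#!/bin/sh
# Recreate the host-side macvlan shim after a reboot.
# The docker network itself persists; only these host settings are lost.
ip link add macvlan0 link eth0 type macvlan mode bridge
ip addr add 192.168.28.46/32 dev macvlan0
ip link set macvlan0 up
ip route add 192.168.28.32/28 dev macvlan0
```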
Testing
Basically the same as above; the HTML files go in `/volume1/docker/nginx`.
```sh
mkdir -p /volume1/docker/nginx/nginx-1 /volume1/docker/nginx/nginx-2
echo 'One' > /volume1/docker/nginx/nginx-1/index.html
echo 'Two' > /volume1/docker/nginx/nginx-2/index.html

sudo docker run --net=macvlan0 --ip=192.168.28.33 --detach --name nginx-1 -v "/volume1/docker/nginx/nginx-1:/usr/share/nginx/html" nginx:alpine
sudo docker run --net=macvlan0 --ip=192.168.28.34 --detach --name nginx-2 -v "/volume1/docker/nginx/nginx-2:/usr/share/nginx/html" nginx:alpine
```
And, again, browsing to `http://192.168.28.33` or `http://192.168.28.34`, I see the expected `One` or `Two` responses.
Moreover, both of those are accessible from the Ubuntu container started on the Linux host above.
Conclusions
You can (relatively) easily run containers on a Synology NAS with locally-accessible IP addresses. This means that you can run multiple containers using the same port number, or where the NAS is also listening on that port.
Whether you need to is another question. Synology Web Station does rudimentary reverse-proxying, including to containers. It supports associating a different TLS certificate with each service, but the Let’s Encrypt integration is kinda lacking: no wildcards unless you’re using Synology’s DDNS, and it’s very manual.
What’s next?
- I’ve not made the settings persist over a restart. I’ll fix that tomorrow (and update this page). If you’re feeling impatient, the links above have you covered.
- Something made the link (specifically the route) vanish even without a restart. I’m not sure what, yet.
- It would be nice if we didn’t have to use IP addresses, so I need to do something with DNS.
- This will involve some kind of messing around with the router. Currently, adding host entries requires manual steps and restarting things.
- I solved that already for the K3s cluster, so I’m thinking that running CoreDNS inside a container on the NAS would work. I found a docker plugin for it. If that doesn’t work, I already wrote https://github.com/rlipscombe/dockerns.
- TLS and certificates. I need to look at Let’s Encrypt.
- Actually installing Forgejo.
Follow-ups
- Instead of running multiple instances of `nginx` with different web roots, it’s easier to use the `traefik/whoami` image.
- It’s not necessary to specify `--ip=` when starting the container; Docker will just use the next available address from the configured address range.
  - Once the addresses have run out, you’ll get `Error response from daemon: no available IPv4 addresses on this network's address pools`.
  - The container will be created, but not started.
  - If you stop and delete another container, you’ll be able to start the failing container.
- This is useful, because Container Manager doesn’t allow you to specify the IP address, even though it does allow you to specify `macvlan0` as the attached network.
- You can get the assigned IP addresses with `sudo docker network inspect macvlan0 | jq '.[].Containers[] | {Name,IPv4Address}'`.
- But you probably do want to specify an IP address. You want fixed IP addresses so you can put your containers in DNS, right? (A quick example follows.)
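For completeness, a quick example of pinning an address (the `traefik/whoami` image from above; the address is illustrative – pick any unused address in the ip-range):

```sh
sudo docker run --net=macvlan0 --ip=192.168.28.35 --detach --name whoami traefik/whoami
curl http://192.168.28.35
```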