TIL: Docker Networks

Snippets of learning.


TL;DR

When reverse-proxying on the same docker host, avoid publishing container ports.

Don’t use:

services:
  foo:
    ports:
    - "80:80"
...

Do use:

services:
  foo:
    networks:
    - foo_net

networks:
  foo_net:
    external: true

Motivation

In my private homelab setup, I found myself writing many blocks of exposed ports in docker-compose.yaml files, like so:

  portainer:
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
      - "9443:9443"
      - "9000:9000"
    ...

To then connect with my Caddyfile and expose it on my tailnet:

portainer.my-domain.tld {
        import tls_cert
        reverse_proxy <vm-ip>:9000
}

This is a bit wonky for a few reasons:

  1. Caddy makes an outgoing TCP connection to the vm’s IP, which re-enters docker via the portainer container’s published port and gets forwarded back to the container. Like so:

Client → Caddy → VM IP → Docker NAT → Portainer container

  2. (Following 1) This sends traffic out of the host and back in.
  3. It eats up a lot of ports. Not that there’s a shortage, but there is a chance of overlap conflicts.
  4. It loses isolation of services.
  5. The docker-compose.yaml is less terse.
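
With a shared docker network, the same request path collapses to:

Client → Caddy → Portainer container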

Port Listening Versus Publishing

When caddy and other services are running on the same docker network, they can all reach each other without explicit port opening. The above docker-compose.yaml can be shortened:

portainer:
  image: portainer/portainer-ce:latest
  restart: unless-stopped
  expose:
    - "9000"

Here, port 9000 is no longer published to the host vm; portainer only listens ‘inside’ the docker network. Now, nothing outside docker can access it directly.

This is also still unnecessary: expose acts more like a comment here. Containers on the same docker network can access portainer:9000 either way. The above becomes:

portainer:
  image: portainer/portainer-ce:latest
  restart: unless-stopped

Similarly, caddy doesn’t need to refer to the vm’s IP. Docker provides built-in DNS-based service discovery between containers on the same network. Service names are resolved via docker’s internal DNS. The Caddyfile becomes:

portainer.my-domain.tld {
    import tls_cert
    reverse_proxy portainer:9000
}
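
This service discovery can be sanity-checked from inside the caddy container (the container and service names here assume the definitions above, and that busybox nslookup/wget are present in the caddy image):

$ docker exec caddy nslookup portainer
$ docker exec caddy wget -qO- http://portainer:9000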

Container Networking

Missing in the above section is setting up the “same docker network”. Docker does this by default for services defined in a single compose project, but they must be defined in the same compose file. Like so:

services:
  caddy:
    image: caddy

  portainer:
    image: portainer/portainer-ce

In this case, docker creates a project-scoped default bridge network, the above Caddyfile works, and life is hunky dory.

Things are decidedly neither hunky nor dory when multiple docker-compose.yaml files are used to manage discrete services. In my homelab case, each service gets its own compose. First, this is how Portainer demands things. Second, the separation is convenient for versioning, updates, and general cleanliness.

  • portainer-compose.yaml
  • caddy-compose.yaml
  • jellyfin-compose.yaml

To handle this, docker allows creation of external shared networks.

networks:
  caddy_net:
    external: true

This network can then be attached to containers from any number of docker-compose.yaml files:

services:
  caddy:
    networks:
      - caddy_net
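
Caddy itself is the one service that still needs published ports, since outside traffic has to enter docker somewhere. A minimal caddy-compose.yaml sketch (the image tag is illustrative, and any volumes for the Caddyfile and certs are omitted):

services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    networks:
      - caddy_net

networks:
  caddy_net:
    external: true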

Wrapping It Together

When caddy_net is created (manually, via the CLI, once):

$ docker network create caddy_net
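
The network, and which containers are attached to it, can be verified at any point with:

$ docker network ls
$ docker network inspect caddy_net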

All ports blocks can be removed from potentially all of the docker-compose.yaml services running in the homelab, leaving a more secure, terser setup:

services:
  moocup:
    image: jellydeck/moocup:latest
    restart: unless-stopped
    networks:
      - caddy_net

networks:
  caddy_net:
    external: true
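
The matching Caddyfile entry then points at the service name and whatever port the app listens on inside its container (the port below is a stand-in, not necessarily moocup’s actual port):

moocup.my-domain.tld {
    import tls_cert
    reverse_proxy moocup:3000
}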

Extra: Downsides

Two problems arise from this:

  1. Port confusion. Reading a docker-compose.yaml file for a given service is now less clear on which port(s) are in use. Solvable by comments, or by reading the Caddyfile.
  2. Non-Caddy access is now impossible. Caddy is unlikely to go down without other services also going down, but it’s possible. Or it may just be desirable to access a specific service directly (maybe DNS to my-domain.tld is unavailable). If access to a service is desperately needed, there will almost certainly still be direct SSH to the host and exec into the container.
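
For that last resort, the exec escape hatch looks something like (container name assumed to match the service name):

$ docker exec -it portainer sh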

For the purposes of this homelab, neither problem outweighs the nicety of using a shared network.

Extra: Multi-Network Containers

Containers can be attached to multiple networks, allowing internal app communication on one network and reverse-proxy access on another. This is beyond the current needs of the homelab.
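
A sketch of what that could look like, with a hypothetical app and database sharing a private backend network while only the app joins caddy_net:

services:
  app:
    image: my-app:latest
    networks:
      - caddy_net
      - backend

  db:
    image: postgres:16
    networks:
      - backend

networks:
  caddy_net:
    external: true
  backend: {}

Here the db is reachable from app at db:5432, but invisible to caddy and anything else outside the backend network.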