
Essential Docker Security Tips for Self-Hosting

Secure your self-hosted services, from defense to monitoring!



🧗‍♀️ For the brave

If you’re self-hosting Docker services, security is your responsibility from top to bottom—no cloud provider to shield you from port scans or sloppy config. Whether you’re spinning up apps on your home network or renting VPSes from providers like Vultr, DigitalOcean, Linode, AWS, Azure, or Google Cloud, you’ll need to lock things down and verify you did it right.

In this guide, we’ll walk through Docker security, from lesser-known tricks to techniques that are difficult to get right. We’ll explore canary tokens, read-only volumes, firewall rules, network segmentation & hardening, authenticated proxies, and more.

We’ll also compare home networks to public cloud setups and show you how to set up a basic auth proxy with Nginx. By the end, you’ll have several options to keep out the riff-raff (friends, family, and sometimes even yourself…)

That’s a ton of stuff! But much of it relates, and you can pick and choose what’s most relevant to your setup. 🍀

🔄 The :latest Dance

Keeping images updated is crucial for security. However, relying on :latest can lead to breaking changes or vulnerabilities creeping in unnoticed.

The Safe Way to Update

Combine startup commands with pull or build to ensure you’re always running the latest image.

update-and-run.sh
#!/bin/bash
docker compose pull && \
docker compose up -d

Version Pinning vs Latest

Choosing the right version to pin to is a balancing act between stability and security. Here are some common strategies:

docker-compose.yml
# ...
# Exact version pinning (postgres tags are MAJOR.MINOR), best for critical services
image: postgres:17.2
# Major version pinning, perfect for hobby projects
image: postgres:17
# Yolo, avoid if possible
image: postgres:latest

Use Dependabot or Renovate to automate version updates and ensure you’re reviewing changes before they break production.
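
For instance, Dependabot can watch the base images referenced in your Dockerfiles from a small config file in your repo. A minimal sketch, assuming your Dockerfile sits at the repository root:

.github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/" # location of the Dockerfile
    schedule:
      interval: "weekly"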

Let me know about your favorite tools for keeping Docker images up-to-date!

🔐 Secrets Management

There are many ways to manage secrets, but one of the most important rules to stick to is: never hard-code secrets into your docker images or commit them to git. It’s one of the most common security mistakes, it presents a long-term risk, and it’s a pain to fix.

Securely storing secrets is a substantial topic with many options, from .env files and Docker secrets to password managers like 1Password/Bitwarden, or a dedicated secrets manager like HashiCorp Vault or AWS Secrets Manager.

You’ll have to choose the “right” level of effort & security for your use case.
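
For example, Docker Compose supports file-based secrets that get mounted into containers at /run/secrets/<name> instead of being passed as environment variables. A minimal sketch, assuming the official postgres image (which supports the *_FILE convention) and a ./secrets/db_password.txt file you create and keep out of git:

docker-compose.yml
services:
  database:
    image: postgres:17
    environment:
      # The official postgres image reads the password from this file
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
secrets:
  db_password:
    file: ./secrets/db_password.txt # add to .gitignore!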

Generate Strong Secrets

Here’s a small script to generate new secrets for an .env file:

generate-secrets.sh
#!/bin/bash

# Generate a random, URL-safe secret of the requested length (default: 30 chars)
generate_secret() {
  local length=${1:-30}
  local generate_length=$((length + 4))
  openssl rand -base64 "$generate_length" | tr -d '+=/\n' | cut -c1-"$length"
}

# Don't clobber an existing .env file
[ -f .env ] && { echo ".env file already exists!"; exit 1; }

cat > .env << EOL
POSTGRES_PASSWORD=$(generate_secret)
JWT_SECRET=$(generate_secret 64)
SESSION_KEY=$(generate_secret 24)
REDIS_PASSWORD=$(generate_secret 20)
UNSAFE_PLACEHOLDER=__WARNING_REPLACE_RANDOM_TEXT__
EOL

echo "New .env file generated with secure random values!"

Canary Tokens

Canary Tokens are a great way to detect if your secrets have been compromised (and used). They’re like a tripwire you can add to any sensitive files, URLs, and tokens.

Put them in every .env file, CI platform, password and secrets manager you use!

There are many types of canary “tokens” to choose from: AWS tokens, fake credit card numbers, Excel & Word files, Kubeconfig files, VPN credentials; even SQL dump files can have a tripwire!
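
For instance, a decoy AWS credential generated at canarytokens.org can sit right next to your real secrets in .env; the variable names below are only examples, and the values are placeholders for your generated token:

.env
# Decoy credentials: any attempt to use them triggers an alert
AWS_ACCESS_KEY_ID=<canary-access-key-id>
AWS_SECRET_ACCESS_KEY=<canary-secret-access-key>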

Canary Token Best Practices

Upgrade from .env to the macOS Keychain

For Mac folks, one of the simplest options is to use the Keychain.

Here’s a simple way to automate loading secrets from the macOS Keychain; it supports Touch ID and is a bit more secure than .env files.

The original credit goes to Brian Hetfield and Jan Schaumann

keychain-secrets.sh
### Functions for setting and getting environment variables from the macOS keychain ###
### Adapted from: https://www.netmeister.org/blog/keychain-passwords.html and
### https://gist.github.com/bmhatfield/f613c10e360b4f27033761bbee4404fd

# Use: get-keychain-secret SECRET_ENV_VAR
function get-keychain-secret () {
  security find-generic-password -w -a "${USER}" -D "environment variable" -s "${1}"
}

# Use: set-keychain-secret SECRET_ENV_VAR
# You will be prompted to enter the secret value!
function set-keychain-secret () {
  [ -n "$1" ] || { echo "Missing environment variable name"; return 1; }
  # Prompt the user for the secret (input is hidden)
  echo -n "Enter secret for ${1}: "
  read -rs secret
  echo
  [ -n "$secret" ] || return 1
  security add-generic-password -U -a "${USER}" -D "environment variable" -s "${1}" -w "${secret}"
}
~/code/app/.env-secrets.sh
source ~/keychain-secrets.sh
# Load Env vars into the current shell
export AWS_ACCESS_KEY_ID=$(get-keychain-secret AWS_ACCESS_KEY_ID);
export AWS_SECRET_ACCESS_KEY=$(get-keychain-secret AWS_SECRET_ACCESS_KEY);
# Note: If an attacker can run `env` in your shell, then these secrets could be exposed!
~/code/app/scripts/env-run.sh
#!/usr/bin/env bash
source ~/keychain-secrets.sh
# Specify all secrets for this project
AWS_ACCESS_KEY_ID=$(get-keychain-secret AWS_ACCESS_KEY_ID) \
AWS_SECRET_ACCESS_KEY=$(get-keychain-secret AWS_SECRET_ACCESS_KEY) \
"$@"
# Note: Using a shell wrapper helps prevent secrets from staying
# around in the environment. And it's safe to commit.
# Usage:
# ./scripts/env-run.sh docker compose up -d
# ./scripts/env-run.sh docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS ...

🌐 Network Hazard

Custom Networks & Internal Ports

Properly isolating services with Docker networks is an important way to reduce your attack surface area.

Be careful poking holes in your network! One misconfigured port forward can end very badly.

By default, services on a private LAN won’t be exposed to the internet; you have to explicitly forward ports from your router.

Docker on LAN

Whether you’re a developer running dev servers locally, or self-hosting services from your local network, assumptions about Docker’s network model can lead to trouble.

Devs are often surprised to find that the ‘traditional’ methods of securing Linux servers (iptables, restricting TCP/IP sysctl options) can fail silently on Docker hosts! This is especially true when self-hosting or running on a typical home network. (For the people in the back: this can allow access to dev containers on your MacBook!!!)

⚠️ Warning #1: By default, Docker (on Ubuntu/Debian) will bypass UFW/iptables rules, rendering your firewall useless. See issue #690: Docker bypasses ufw firewall rules.

⚠️ Warning #2: Binding ports to local IP addresses (e.g., -p 127.0.0.1:8080:80) may offer limited protection in certain cases. See issue #45610: Publishing ports explicitly to private networks should not be accessible from LAN hosts. (Impacts Fedora, Ubuntu, and likely others.)

If you’re surprised to learn this, same!

Binding to local IPs is still a good practice and has a meaningful impact in managed cloud environments and specially configured networks.

Example Docker Compose

Here’s an example docker-compose.yml file that binds the app service to 127.0.0.1:8080 and connects both containers to the backend custom network.

docker-compose.yml
networks:
  backend:

services:
  app:
    networks:
      - backend
    ports:
      # Bind to localhost if possible
      - "127.0.0.1:8080:8080"
    # ... other settings
  database:
    image: postgres:17.1
    # No ports needed; accessible inside the backend network.
    networks:
      - backend

Network Best Practices

🛡️ Access Controls

Access controls are a critical part of securing your Docker services. This includes limiting container capabilities & permissions, restricting access to the Docker socket, and more.

Limiting Container Capabilities

Another solid access control practice is to limit the capabilities of your containers. This helps prevent several classes of threats: privilege escalation, data theft/exfiltration, traffic hijacking, and more.

What are capabilities? Linux kernel-defined, named permissions or abilities. (The capabilities man page has a full list.) They include things like CAP_CHOWN (change file ownership), CAP_NET_ADMIN (configure network interfaces), CAP_KILL (kill any process), and many more.

The two ways to determine needed capabilities are:

  1. Trial and Error: This slower-but-effective method has you start with no capabilities, then add them back one by one until your app works (sketched below).
  2. Find prior work: Search for “project-name cap_drop Dockerfile”, or “project-name cap_drop docker-compose.yml” to see if others have already done the work for you. Sometimes ChatGPT can conjure up the right configuration for you, too!
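
Here’s what that trial-and-error loop can look like with plain docker run flags (my-app and NET_BIND_SERVICE are placeholders; substitute your image and whichever capability the failure points to):

Terminal window
# Start from zero capabilities and see what breaks
docker run --rm --cap-drop=ALL my-app:latest
# Add capabilities back one at a time until the app works
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-app:latest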

Capabilities Best Practice

Example: Drop/Limit Capabilities
services:
  database:
    image: postgres:17.1
    networks: [ db-network ]
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_READ_SEARCH
      - FOWNER
      - SETGID
      - SETUID
  db-admin:
    image: dpage/pgadmin4:4.1
    networks: [ db-network ]
    ports:
      - "8081:80"
    # ... other settings

networks:
  db-network:

Now your services can communicate with each other via the db-network network, which Docker Compose will create automatically.

Use the --external/external: option to join a pre-existing network. Omit it to create a new network.
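
For example, a minimal sketch, assuming a shared-proxy network you’ve already created with docker network create shared-proxy:

docker-compose.yml
networks:
  shared-proxy:
    external: true # join the pre-existing network instead of creating a new one
services:
  app:
    networks:
      - shared-proxy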

Docker Socket Access

⚠️ Warning: docker.socket is extremely powerful and dangerous dark magic!

⚠️ The `:ro` option doesn’t affect I/O sent over the socket!

It merely ensures the socket file itself is mounted read-only; commands sent over the socket can still do anything the Docker API allows.
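
To make that concrete, here’s a rough sketch of the safer pattern: instead of handing an app the socket directly (even with :ro, it would get the full API), front the socket with a proxy that only exposes selected endpoints. The image and options shown (tecnativa/docker-socket-proxy, CONTAINERS=1, port 2375) are one common setup; double-check them against the proxy’s documentation.

docker-compose.yml
services:
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1 # expose read-only container endpoints; everything else stays blocked
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  monitoring-app:
    image: example/monitoring-app # hypothetical app that only needs to list containers
    environment:
      DOCKER_HOST: tcp://docker-socket-proxy:2375 # talk to the proxy, never the raw socket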

Socket Best Practice

Blocking Country!

Another decent idea!

Talking about the geopolitical entity, not the music…

If you are hosting apps mostly for your local family & friends, you can block traffic from countries you don’t expect to receive traffic from. Or only allow traffic from countries you do expect.

Check out this script to block all traffic from China (sorry, China):

block-china.sh
curl -fsSL https://www.ipdeny.com/ipblocks/data/countries/cn.zone | \
while read line; do ufw deny from $line to any; done

Similarly, you can allow only traffic from the US:

allow-usa.sh
curl -fsSL https://www.ipdeny.com/ipblocks/data/countries/us.zone | \
while read line; do ufw allow from $line to any; done

Hardening CloudFlare Proxy Host

If your home server is protected behind a CloudFlare proxy, you can restrict inbound access to only CloudFlare’s IP ranges and your local network.

This is a bit similar to Country blocking above, but with much tighter control.

whitelist-ingress-from-cloudflare.sh
ufw default deny incoming # Block all incoming!!!
ufw default allow outgoing # Allow all outgoing
ufw allow ssh # Allow SSH
# Allow access for local subnet (preferably dedicated DMZ/VLAN for hosted services)
ufw allow from 10.0.0.0/8 to any port 443
# Allow CloudFlare IPs
curl -fsSL https://www.cloudflare.com/ips-v4 | \
while read line; do ufw allow from $line to any port 443; done
# Add IPv6 support
# curl -fsSL https://www.cloudflare.com/ips-v6 | \
# while read line; do ufw allow from $line to any port 443; done

To test geo-based changes, a VPN with exit locations in the desired country can be useful. See the Monitoring & Verification section below for more.

App Layer Security

Once your network and host are security hardened, you may find there’s more to do.

Now we need to think about the “application” layer of our services themselves.

Does that database have a valid password? Does this container automate HTTPS/certs? Does the app include built-in auth? Are there limits on which emails can sign up? Are there default credentials or environment variables to change?

The only way to know is to check. Start with the README and other key files like docker-compose.yml, Dockerfile, and .env.*, both in the project itself and, ideally, in its supporting services (e.g. Postgres, Redis, etc.).
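
A quick, low-tech way to start that audit (just a heuristic; adjust the pattern and file list to fit your project):

Terminal window
# Hunt for credentials, defaults, and auth-related settings in the usual suspects
grep -RniE 'password|secret|token|api_key' docker-compose.yml Dockerfile .env* README* 2>/dev/null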

Reverse Proxy

Another layer of defense is basic auth. I know it’s dangerous to use without HTTPS, but sometimes it’s the best you can do (legacy services), and it’s often enough to stop automated Cross-Site-Request-Forgery attacks.

/etc/nginx/conf.d/secure-admin.conf
location /admin {
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://internal_admin:80;
    proxy_set_header X-Real-IP $remote_addr;
}

Generate credentials:

Terminal window
htpasswd -c /etc/nginx/.htpasswd admin

With a basic auth proxy, attackers have an extra hurdle—username and password—before hitting your internal service.

Another option is to use a service like Traefik or Caddy that can automate HTTPS and basic auth for you.
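
For the curious, here’s a rough Caddyfile sketch of that combo; the domain, upstream name, and hash are placeholders, and you’d generate the bcrypt hash with caddy hash-password (newer Caddy releases also accept the basic_auth spelling):

Caddyfile
admin.example.com {
    # Caddy obtains and renews the HTTPS certificate automatically
    basicauth {
        admin <bcrypt-hash>
    }
    reverse_proxy internal_admin:80
}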

If you want to manage many domains & services with a GUI, I’d recommend Nginx Proxy Manager.

🔍 Monitoring & Verification

This is the most important & most overlooked step. You can have the best firewall, the best network, and the best practices, but if you don’t verify, you have no idea if it’s working.

Plus, knowing just a handful of commands (or where to look them up) can make the difference in catching a breach early. The feeling of being a hacker is just a bonus.

Don’t Trust, Verify Twice

Check Your Ports

⚠️ IMPORTANT: Do not scan hosts you do not own.

Whether you’re on a home network or a VPS, you will want to know what ports are open to the world.

There are two ways to do this: scanning from outside your network, and scanning from inside it.

Testing Outside Your Network

You’ll need your current (public) IP. You can grab it easily with a service like ifconfig.me (curl https://ifconfig.me), or look it up in your hosting provider’s dashboard.

Get Public IP
curl -fsSL https://ifconfig.me
# --> CURRENT PUBLIC IP

Once you have your public IP, you’ll need to scan from an external network. You can use a friend’s computer, a phone/5G hotspot, or a dedicated server host.

nmap External Scan
target_host="$(curl -fsSL https://ifconfig.me)"
# Note: Ensure `target_host` is the desired IP
# Scan specific ports:
nmap -A -p 80,443,8080 --open --reason $target_host
# Top 100 ports:
nmap -A --top-ports 100 --open --reason $target_host
# All ports
nmap -A -p1-65535 --open --reason $target_host

Test Inside Your Network

Practice using nmap: scan your local network or one of your servers; check your router, printer, even your smart fridge.

Example Scan Commands

Terminal window
# Scan your localhost for open TCP ports
nmap -sT localhost
# Scan your machine’s private IP and detect service versions
nmap -sV 192.168.1.10
# Discover hosts on your network (ping scan)
nmap -sn 192.168.0.0/24
nmap -sn 10.0.0.0/24
# Or on a Docker bridge network (e.g. 172.18.0.0/16)
nmap -sn 172.18.0.0/16
nmap Scan
% nmap -A --open --reason 192.168.0.87
Starting Nmap 7.95 ( https://nmap.org ) at 2025-01-06 13:51 MST
Nmap scan report for dev02.local (192.168.0.87)
Host is up, received syn-ack (0.0067s latency).
Not shown: 995 closed tcp ports (conn-refused)
PORT STATE SERVICE REASON VERSION
22/tcp open ssh syn-ack OpenSSH 9.6p1 Ubuntu 3ubuntu13.5 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|_ 256 {FINGERPRINT} (ED25519)
80/tcp open http syn-ack Caddy httpd
|_http-server-header: Caddy
|_http-title: Dev02.DanLevy.net
443/tcp open ssl/https syn-ack
|_http-title: Dev02.DanLevy.net
1234/tcp open http syn-ack Node.js Express framework
|_http-cors: GET POST PUT DELETE PATCH
|_http-title: Dev02.DanLevy.net (application/json; charset=utf-8).
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 13.36 seconds

View Open Ports

Get familiar with lsof; it’s available on macOS & Linux, and it shows open files and network connections in granular detail.

lsof Commands
# Monitor a specific port
sudo lsof -i:80 -Pn
# Show ESTABLISHED connections
sudo lsof -i -Pn | grep ESTABLISHED
# Show LISTENing sockets
sudo lsof -i -Pn | grep LISTEN
# Resolve hostnames instead of IP addresses (reverse DNS lookups can be very slow)
sudo lsof -i -P | grep LISTEN
# Monitor all network connections, refreshed every second
sudo watch -n1 "lsof -i -Pn"

Example Output

[Screenshot: nmap scan for listeners]

File monitoring

To identify which processes are using the most hard drive bandwidth, you can use iotop:

Terminal window
sudo iotop

To see individual file changes, you can use inotifywait on Linux or fswatch on macOS.

This can be useful for detecting unauthorized or strange behavior, per folder or system-wide.

Terminal window
# Monitor all file changes in a directory
sudo inotifywait -m /path/to/directory

On macOS you can use fswatch:

Install with brew install fswatch

Terminal window
fswatch -r /path/to/directory

⏰ Often Overlooked Tips

  1. Rate Limiting for authentication attempts & any other key endpoints. Whether via Nginx’s limit_req module or fail2ban for SSH access, throttling brute-force attempts is probably a good idea (see the sketch after this list). I say probably because in the age of IPv6 and botnets-for-cheap, well, it’s not what it used to be.

  2. Use Read-Only Volumes where possible:

    services:
      webapp:
        volumes:
          - ./config:/config:ro

    Combined with other best practices (non-root users, minimal folder permissions), the `:ro` volume mount option provides additional safeguards against accidental (or malicious) changes to critical files.

  3. Audit Container Access regularly. If a container doesn’t need a secret, port, or mount, remove it!

  4. Beware of WiFi Riff-Raff. I’m sure you’d never give out your WiFi password, especially to any weirdos, right? Well, except some friends… Okay, maybe family too. You never know what apps they have and which might share your SSID & password with the world.
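
Here’s a minimal Nginx sketch of tip #1 above; the zone name, rate, and upstream (login, 5r/m, internal_app) are arbitrary examples to tune for your app:

/etc/nginx/conf.d/rate-limit.conf
# Track clients by IP; allow ~5 login attempts per minute each
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

server {
    listen 443 ssl;
    server_name app.example.com;
    # ... TLS settings

    location /login {
        limit_req zone=login burst=10 nodelay;
        proxy_pass http://internal_app:80;
    }
}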

Home Network vs. Public Provider vs. Tunneling

  1. Virtual Isolation/DMZ: For home servers, put them on a separate VLAN or DMZ if possible. This keeps your internal devices off-limits to potential compromise from the server side.

    • Use a separate router or VLAN for your home server.
    • Use a separate WiFi network for your home server.
    • Use a separate subnet for your home server.
  2. Cloud Providers: Hetzner, Vultr, DigitalOcean, Linode, AWS, Azure, and Google Cloud all provide different firewall features.

    • Some providers & services block ports by default. Some offer opt-ins or add-ons. Check your service provider’s documentation.
    • Many providers offer advanced monitoring and threat detection services.
  3. VPNs & Tunneling: Consider using a VPN-like option or tunneling service to securely connect to your services across the internet without exposing them publicly (as sketched after this list).

    • Tailscale, ngrok, ZeroTier.
    • WireGuard, OpenVPN.
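
For example, here’s a rough sketch of exposing a local container over Tailscale instead of opening a port to the world (assumes Tailscale is already installed on the host; verify the serve flags against your Tailscale version):

Terminal window
# Join your tailnet (prompts a browser/device auth flow)
sudo tailscale up
# Proxy https://<machine>.<tailnet>.ts.net to the app on localhost:8080,
# reachable only from devices on your tailnet
sudo tailscale serve --bg 8080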

🚀 Production Checklist

📚 Further Reading

Thanks

A shout-out to some keen Redditors:

Thanks for reading! I hope you found this guide helpful. If you have any questions or suggestions, feel free to reach out on my socials below, or click the Edit on GitHub link to create a PR! ❤️
