Docker Networking: The Complete Guide for 2026
Every container needs to communicate — with other containers, with the host, with external services, or with the internet. Docker's networking subsystem is what makes all of this possible, and understanding it is the difference between a working development setup and a production-ready infrastructure. Yet networking remains one of the most misunderstood parts of Docker, leading to containers that cannot reach each other, ports that mysteriously conflict, and DNS resolution that works in development but breaks in production.
This guide covers Docker networking from the fundamentals through production-grade configurations. If you are new to Docker, start with our Docker Containers for Beginners guide. If you already use Docker Compose, you will find that the Docker Compose guide pairs well with the networking concepts covered here.
Table of Contents
- Docker Networking Fundamentals
- Network Drivers: Bridge, Host, Overlay, Macvlan, None
- Default Bridge vs User-Defined Bridge
- Container-to-Container Communication
- Port Mapping and Publishing
- DNS Resolution in Docker Networks
- Docker Compose Networking
- Multi-Host Networking with Overlay
- Network Security and Isolation
- IPv6 in Docker
- Network Troubleshooting
- Load Balancing with Docker Networks
- Best Practices for Production Networking
- Frequently Asked Questions
1. Docker Networking Fundamentals
When Docker starts, it creates a virtual bridge interface called docker0 on the host. Each container gets its own network namespace — an isolated network stack with its own interfaces, routing table, and iptables rules. Docker connects each container's virtual ethernet interface (veth) to the bridge, creating a private network where containers can communicate.
The key concepts you need to understand:
- Network namespace — each container has an isolated network stack; processes inside a container see different network interfaces than the host
- Virtual ethernet pair (veth) — a pipe connecting the container's namespace to a Docker network bridge; one end is inside the container (usually eth0), the other end is on the host bridge
- Bridge — a virtual Layer 2 switch that forwards traffic between connected containers and optionally to the outside world via NAT
- iptables/nftables — Docker configures firewall rules to handle port mapping, NAT (masquerading), and inter-container traffic filtering
- Embedded DNS — Docker runs an internal DNS server (127.0.0.11) on user-defined networks for service discovery by container name
# See all Docker networks
docker network ls
# Inspect the default bridge network
docker network inspect bridge
# See how Docker configured iptables
sudo iptables -L -n -t nat | grep DOCKER
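The addressing side of this picture is simple enough to sketch. The snippet below is a toy model of Docker's default IPAM behavior, not Docker's actual code: the gateway takes the first usable address on the bridge subnet, and containers receive the next free addresses in order. The subnet mirrors docker0's usual default, 172.17.0.0/16.

```python
import ipaddress

def allocate(subnet: str, n_containers: int):
    """Toy model of Docker's default IPAM: the gateway gets the first
    usable host address, containers get the next free ones in order."""
    net = ipaddress.ip_network(subnet)
    hosts = net.hosts()          # generator over usable host addresses
    gateway = next(hosts)        # e.g. 172.17.0.1 on docker0
    containers = [next(hosts) for _ in range(n_containers)]
    return str(gateway), [str(ip) for ip in containers]

gw, ips = allocate("172.17.0.0/16", 3)
print(gw, ips)   # 172.17.0.1 ['172.17.0.2', '172.17.0.3', '172.17.0.4']
```

This is why the first container you start on a fresh default bridge almost always lands on 172.17.0.2.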
2. Network Drivers: Bridge, Host, Overlay, Macvlan, None
Docker uses a pluggable driver architecture for networking. Each driver provides different capabilities and trade-offs.
Bridge (default)
The bridge driver creates an isolated network on a single host. Containers on the same bridge can communicate directly. This is the default driver and the right choice for most single-host deployments.
# Create a bridge network
docker network create my-app-net
# Run containers on it
docker run -d --name api --network my-app-net nginx:alpine
docker run -d --name db --network my-app-net postgres:16-alpine
# api can now reach db by name: ping db
Host
The host driver removes network isolation — the container shares the host's network namespace directly. No port mapping is needed because the container binds to host ports. This gives maximum performance but zero network isolation.
# Container binds directly to host port 80
docker run -d --network host nginx:alpine
# No -p flag needed; nginx listens on host port 80 directly
# Native to Linux; Docker Desktop on Mac/Windows supports host networking
# only in recent releases, as an opt-in feature
Overlay
The overlay driver creates a distributed network spanning multiple Docker hosts. It requires Docker Swarm mode and uses VXLAN encapsulation to tunnel traffic between hosts.
# Initialize swarm mode (on manager node)
docker swarm init
# Create an overlay network
docker network create --driver overlay --attachable my-overlay
# Services on any swarm node can now communicate
docker service create --name api --network my-overlay myapp:latest
Macvlan
The macvlan driver assigns a real MAC address to each container, making it appear as a physical device on the network. Containers get IP addresses from a range you configure on the physical network's subnet — Docker's own IPAM assigns them, so they do not come from your network's DHCP server unless you add a third-party IPAM driver. This is useful when containers need to be directly addressable on the LAN.
# Create a macvlan network on the host's eth0 interface
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
my-macvlan
# Container gets an IP on the 192.168.1.0/24 network
docker run -d --network my-macvlan --ip 192.168.1.100 nginx:alpine
None
The none driver disables all networking. The container only has a loopback interface. Use this for containers that process data without any network access — batch jobs, file processing, or security-sensitive workloads.
# Container with no network access
docker run --network none alpine ping google.com
# ping: bad address 'google.com' — network is disabled
3. Default Bridge vs User-Defined Bridge
This is one of the most important distinctions in Docker networking. The default bridge network and user-defined bridge networks behave very differently.
# Default bridge — containers CANNOT resolve each other by name
docker run -d --name web1 nginx:alpine
docker run -d --name web2 nginx:alpine
docker exec web1 ping web2
# ping: bad address 'web2' — DNS does not work
# User-defined bridge — containers CAN resolve each other by name
docker network create app-net
docker run -d --name api --network app-net nginx:alpine
docker run -d --name db --network app-net postgres:16-alpine
docker exec api ping db
# PING db (172.20.0.3): 56 data bytes — works!
Key differences between default and user-defined bridge networks:
| Feature | Default Bridge | User-Defined Bridge |
|---|---|---|
| DNS resolution | No (IP only) | Yes (by container name) |
| Isolation | All containers share it | Only explicitly connected containers |
| Live connect/disconnect | No (requires restart) | Yes |
| Configurable subnet | Limited | Full control |
| Network aliases | No | Yes |
Rule of thumb: never use the default bridge network for anything beyond quick testing. Always create a user-defined network.
4. Container-to-Container Communication
Containers on the same user-defined network can communicate freely using container names as hostnames. Containers on different networks are completely isolated from each other unless you explicitly connect them to both networks.
# Create two isolated networks
docker network create frontend
docker network create backend
# Place containers on their respective networks
docker run -d --name nginx --network frontend nginx:alpine
docker run -d --name api --network backend myapp:latest
docker run -d --name db --network backend postgres:16-alpine
# nginx cannot reach api or db (different networks)
docker exec nginx ping api # fails
# Connect api to the frontend network as well
docker network connect frontend api
# Now nginx can reach api, and api can reach db
docker exec nginx ping api # works
docker exec api ping db # works
# nginx still cannot reach db (no shared network)
Network Aliases
Aliases let a container respond to multiple DNS names on a network. This is useful for service migration or when multiple containers should respond to the same name (round-robin DNS).
# Give the container an alias
docker run -d --name postgres-primary \
--network backend \
--network-alias db \
--network-alias database \
postgres:16-alpine
# Other containers can reach it as "db", "database",
# or "postgres-primary"
5. Port Mapping and Publishing
By default, container ports are not accessible from outside the Docker network. Port mapping publishes a container port to a host port, making it reachable from the host machine and external clients.
# Basic port mapping: -p hostPort:containerPort
docker run -d -p 8080:80 nginx:alpine
# Access at http://localhost:8080
# Map to a specific host interface (localhost only)
docker run -d -p 127.0.0.1:8080:80 nginx:alpine
# Only accessible from the host, not from external machines
# Random host port (Docker picks an available port)
docker run -d -p 80 nginx:alpine
docker port <container_id> # see the assigned port
# UDP port mapping
docker run -d -p 5353:53/udp coredns/coredns
# Multiple port mappings
docker run -d -p 80:80 -p 443:443 nginx:alpine
# Port range
docker run -d -p 8000-8010:8000-8010 myapp:latest
publish vs expose
These two concepts are often confused:
- -p / --publish — maps a container port to a host port, making it accessible from outside Docker. This modifies iptables rules on the host.
- --expose / EXPOSE in Dockerfile — documents which ports the container listens on but does NOT publish them. It is metadata only. Containers on the same network can already reach any port without expose.
# EXPOSE in Dockerfile is documentation only
EXPOSE 3000
# --expose at runtime is also documentation only
docker run --expose 3000 myapp:latest
# -p actually publishes the port to the host
docker run -p 3000:3000 myapp:latest
6. DNS Resolution in Docker Networks
Docker's embedded DNS server is one of its most powerful networking features. On user-defined networks, every container automatically gets a DNS entry matching its name, and Docker resolves these names to the correct container IP addresses.
# Docker's embedded DNS server runs at 127.0.0.11
docker run --rm --network my-net alpine cat /etc/resolv.conf
# nameserver 127.0.0.11
# DNS resolves container names to IPs
docker exec api nslookup db
# Name: db
# Address: 172.20.0.3
# DNS also resolves network aliases
docker exec api nslookup database
# Name: database
# Address: 172.20.0.3
How DNS Resolution Works
- A container makes a DNS query (e.g., lookup "db")
- The query goes to Docker's embedded DNS at 127.0.0.11
- Docker checks if "db" matches any container name or alias on the same network
- If found, Docker returns the container's IP address
- If not found, Docker forwards the query to the host's configured DNS servers
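The lookup order above can be sketched as a toy resolver. This is a model of the behavior, not Docker's implementation, and the container table and upstream entries are made up for illustration:

```python
# Toy model of Docker's embedded DNS: local container names and aliases
# are checked first, then the query falls through to upstream resolvers.
local_records = {                 # hypothetical containers on the network
    "db": "172.20.0.3",
    "database": "172.20.0.3",     # network alias for the same container
    "api": "172.20.0.2",
}

def resolve(name: str, upstream: dict):
    if name in local_records:     # steps 3-4: match a name or alias
        return local_records[name]
    return upstream.get(name)     # step 5: forward to the host's DNS

upstream = {"example.com": "93.184.216.34"}   # stand-in for real DNS
print(resolve("db", upstream))            # 172.20.0.3 (local container)
print(resolve("example.com", upstream))   # 93.184.216.34 (forwarded)
```

The important property to notice is that container names shadow external names: if a container on the network is called `db`, the embedded DNS answers for it before any upstream server is consulted.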
Custom DNS Configuration
# Set custom DNS servers for a container
docker run --dns 8.8.8.8 --dns 1.1.1.1 myapp:latest
# Add a custom /etc/hosts entry
docker run --add-host db.local:192.168.1.50 myapp:latest
# Configure DNS at the daemon level (daemon.json)
# { "dns": ["8.8.8.8", "1.1.1.1"] }
7. Docker Compose Networking
Docker Compose automatically creates a user-defined bridge network for each project and connects all services to it. This is why services in a Compose file can reach each other by service name out of the box — no manual network creation needed.
# docker-compose.yml
services:
api:
image: myapp:latest
ports: ["3000:3000"]
db:
image: postgres:16-alpine
cache:
image: redis:7-alpine
# Compose creates: myproject_default network
# api can reach db at hostname "db" and cache at hostname "cache"
Custom Networks in Compose
Define custom networks to isolate groups of services. A frontend network handles external traffic while a backend network keeps the database internal:
services:
nginx:
image: nginx:alpine
ports: ["80:80"]
networks: [frontend]
api:
build: .
networks: [frontend, backend] # bridge between networks
db:
image: postgres:16-alpine
networks: [backend] # isolated from frontend
cache:
image: redis:7-alpine
networks: [backend]
networks:
frontend:
backend:
# nginx -> api (via frontend)
# api -> db, cache (via backend)
# nginx cannot reach db or cache directly
External Networks
Connect Compose services to networks managed outside the project, letting multiple Compose projects share a network:
# Create the shared network manually, then reference it in Compose
docker network create shared-services
# In any docker-compose.yml that needs to join:
services:
api:
image: api:latest
networks: [shared, default]
networks:
shared:
external: true
name: shared-services
Network Configuration Options in Compose
networks:
backend:
driver: bridge
ipam:
config:
- subnet: 172.28.0.0/16
gateway: 172.28.0.1
8. Multi-Host Networking with Overlay
Overlay networks let containers on different Docker hosts communicate as if they were on the same local network. This is the foundation of Docker Swarm service discovery and the key to multi-host deployments.
Setting Up an Overlay Network
# Initialize swarm on the manager node
docker swarm init --advertise-addr 192.168.1.10
# Join worker nodes to the swarm
docker swarm join --token SWMTKN-1-xxx 192.168.1.10:2377
# Create an overlay network
docker network create --driver overlay --attachable my-overlay
# Deploy a service across multiple nodes
docker service create \
--name api \
--network my-overlay \
--replicas 3 \
myapp:latest
# All 3 replicas can communicate regardless of which node they run on
Overlay with Encryption
# Enable IPSec encryption for overlay traffic between hosts
docker network create --driver overlay --opt encrypted secure-overlay
# Performance impact: ~10-15% overhead from encryption
Under the hood, overlay networks use VXLAN (Virtual Extensible LAN) to encapsulate Layer 2 frames inside UDP packets. Docker manages the VXLAN tunnel endpoints automatically on each host, and a gossip protocol distributes network state across swarm nodes.
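That encapsulation is also why overlay networks reduce the effective MTU. Every inner frame gains an outer Ethernet, IP, UDP, and VXLAN header; a quick calculation with the standard header sizes (IPv4 outer header, no options) shows where Docker's default overlay MTU of 1450 bytes comes from:

```python
# VXLAN encapsulation overhead per packet (IPv4 outer header, no options)
OUTER_ETHERNET = 14   # outer Ethernet header
OUTER_IPV4 = 20       # outer IPv4 header
OUTER_UDP = 8         # outer UDP header (VXLAN uses port 4789)
VXLAN_HEADER = 8      # VXLAN header carrying the 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
inner_mtu = 1500 - overhead       # what fits inside a standard 1500-byte link
print(overhead, inner_mtu)        # 50 1450
```

If the underlay itself has a reduced MTU (common in cloud VPCs), subtract the same 50 bytes from that figure instead when sizing the overlay.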
9. Network Security and Isolation
Docker networks provide isolation by default, but production deployments require deliberate security configuration.
Network Segmentation
# Isolate tiers with separate networks
docker network create --internal db-net # --internal blocks external access
docker network create app-net
docker network create dmz-net
# Database: only on internal network (no internet access)
docker run -d --name db --network db-net postgres:16-alpine
# API: bridges app-net and db-net
docker run -d --name api --network app-net myapp:latest
docker network connect db-net api
# Nginx: bridges dmz-net and app-net
docker run -d --name nginx --network dmz-net -p 443:443 nginx:alpine
docker network connect app-net nginx
# Result: internet -> nginx -> api -> db
# db cannot reach the internet (--internal network)
Internal Networks
The --internal flag creates a network with no external connectivity. Containers on an internal network can communicate with each other but cannot reach the internet or the host network. This is ideal for databases and other backend services that should never make outbound connections.
# In Docker Compose
networks:
db-internal:
internal: true # no external access
Disable Inter-Container Communication
# Disable ICC on a bridge network
docker network create --opt com.docker.network.bridge.enable_icc=false isolated-net
# Containers on this network cannot reach each other
# They can still reach the outside world via NAT
# Useful for multi-tenant scenarios
Bind to Localhost Only
# DANGEROUS: binds to all interfaces (0.0.0.0)
docker run -p 5432:5432 postgres:16-alpine
# SAFE: binds to localhost only
docker run -p 127.0.0.1:5432:5432 postgres:16-alpine
# Set the default bind address in daemon.json
# { "ip": "127.0.0.1" }
10. IPv6 in Docker
Docker supports IPv6, but it is not enabled by default. Enabling IPv6 lets containers communicate over IPv6 with each other and with external IPv6 hosts.
Enable IPv6 in the Docker Daemon
# /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "ip6tables": true
}
# Older engines also required "experimental": true for ip6tables;
# recent releases enable ip6tables by default
# Restart Docker after changing daemon.json
sudo systemctl restart docker
IPv6 on User-Defined Networks
# Create a dual-stack network (IPv4 + IPv6)
docker network create \
--ipv6 \
--subnet 172.20.0.0/16 \
--subnet fd00:dead:beef::/48 \
dual-stack-net
# Containers get both IPv4 and IPv6 addresses
docker run -d --name web --network dual-stack-net nginx:alpine
docker inspect web | grep -A5 "Networks"
IPv6 in Docker Compose
networks:
app-net:
enable_ipv6: true
ipam:
config:
- subnet: 172.28.0.0/16
- subnet: "fd00:db8::/64"
Note that IPv6 in Docker still requires careful attention to host firewall rules: without the ip6tables option, Docker does not manage IPv6 firewall rules the way it does for IPv4, and containers may be reachable over IPv6 more broadly than you intend. Verify your firewall configuration after enabling IPv6.
11. Network Troubleshooting
When containers cannot communicate, systematic debugging saves hours of guessing. Here is a structured troubleshooting workflow.
Step 1: Verify Network Connectivity
# Check which networks a container is on
docker inspect --format='{{json .NetworkSettings.Networks}}' my-container | python3 -m json.tool
# List all containers on a specific network
docker network inspect my-net --format='{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
# Test basic connectivity
docker exec api ping -c 3 db
# Test DNS resolution
docker exec api nslookup db
docker exec api getent hosts db
Step 2: Inspect Network Configuration
# Full network details
docker network inspect my-net
# Check port mappings
docker port my-container
# View container's network interfaces
docker exec my-container ip addr show
docker exec my-container ip route
# Check the container's resolv.conf
docker exec my-container cat /etc/resolv.conf
Step 3: Deep Debugging with nsenter
# Get the container's PID
PID=$(docker inspect --format '{{.State.Pid}}' my-container)
# Enter the container's network namespace
sudo nsenter --target $PID --net
# Now you can run any network tool in the container's namespace
ss -tlnp # listening ports
ip addr # network interfaces
ip route # routing table
iptables -L -n # firewall rules
# Capture traffic (install tcpdump on the host)
sudo nsenter --target $PID --net tcpdump -i eth0 -n port 5432
Common Issues and Fixes
# Issue: "Name resolution failure" between containers
# Fix: Use a user-defined network (not the default bridge)
docker network create my-net
docker run --network my-net --name api myapp:latest
# Issue: Container cannot reach the internet
# Fix: Check DNS and routing
docker exec my-container ping 8.8.8.8 # test IP connectivity
docker exec my-container nslookup google.com # test DNS
# If IP works but DNS fails, check --dns settings
# Issue: "Port already in use"
# Fix: Find and stop the conflicting process
sudo ss -tlnp | grep :8080
# Or change the host port: -p 8081:80
# Issue: Containers on different Compose projects cannot communicate
# Fix: Use an external shared network
docker network create shared
# Add to both compose files as external network
12. Load Balancing with Docker Networks
Docker provides built-in load balancing mechanisms, and you can layer external load balancers on top for production-grade traffic distribution.
Docker's Built-In DNS Round Robin
# Create multiple containers with the same network alias
docker network create lb-net
docker run -d --name web1 --network lb-net --network-alias web nginx:alpine
docker run -d --name web2 --network lb-net --network-alias web nginx:alpine
docker run -d --name web3 --network lb-net --network-alias web nginx:alpine
# DNS lookups for "web" round-robin across all three containers
docker run --rm --network lb-net alpine nslookup web
# Returns multiple IP addresses
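Keep in mind that DNS round robin gives only coarse balancing: most clients simply connect to the first address in the reply, so distribution depends on the server varying record order per query and on clients not caching aggressively. A small simulation of that rotation (illustrative only, with made-up backend IPs):

```python
from collections import Counter

backends = ["172.21.0.2", "172.21.0.3", "172.21.0.4"]  # the three web containers

def dns_reply(query_number: int):
    """Model a DNS server that rotates record order on each query."""
    shift = query_number % len(backends)
    return backends[shift:] + backends[:shift]

# A naive client always connects to the first address returned
picks = Counter(dns_reply(i)[0] for i in range(300))
print(picks)   # each backend is picked 100 times
```

Under perfect rotation the load splits evenly, but a single client that caches the reply will hammer one backend — which is why connection-aware proxies are preferred for serious load balancing.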
Swarm Internal Load Balancing
# Deploy a service with multiple replicas
docker service create \
--name api \
--network my-overlay \
--replicas 3 \
--publish published=80,target=3000 \
myapp:latest
# Docker's routing mesh distributes traffic across replicas
# Requests to any swarm node on port 80 reach a healthy replica
# Internal VIP load balancing: services get a virtual IP that
# load-balances across all task replicas
External Load Balancers
For production workloads, use a dedicated reverse proxy like nginx or Traefik in front of your application containers:
services:
traefik:
image: traefik:v3.0
command:
- "--providers.docker=true"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
networks: [frontend]
api:
image: myapp:latest
deploy:
replicas: 3
labels:
- "traefik.http.routers.api.rule=Host(`api.example.com`)"
- "traefik.http.services.api.loadbalancer.server.port=3000"
networks: [frontend, backend]
networks:
frontend:
backend:
13. Best Practices for Production Networking
Always Use User-Defined Networks
Never rely on the default bridge. User-defined networks give you DNS resolution, better isolation, and the ability to connect and disconnect containers without restarting them.
Segment Your Networks by Function
# Three-tier architecture
networks:
dmz: # public-facing (reverse proxy)
app: # application tier
data:
internal: true # database tier, no internet access
services:
nginx:
networks: [dmz, app]
api:
networks: [app, data]
db:
networks: [data]
Bind Published Ports to Localhost
# Instead of exposing to all interfaces:
ports:
- "5432:5432" # accessible from anywhere
# Bind to localhost only:
ports:
- "127.0.0.1:5432:5432" # only accessible from the host
Use Internal Networks for Backend Services
Databases, caches, and message queues should never need outbound internet access. Use internal: true to prevent them from reaching the internet.
Do Not Publish Database Ports
If your API container connects to the database over a Docker network, there is no reason to publish the database port to the host. Remove the ports: section from database services entirely.
Configure Custom Subnets to Avoid Conflicts
# Docker's default subnet pool can conflict with corporate VPNs
# Configure a specific range in daemon.json
{
"default-address-pools": [
{ "base": "10.10.0.0/16", "size": 24 }
]
}
# Or set per-network subnets
docker network create --subnet 10.10.1.0/24 my-net
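Before committing to a pool, it is worth checking it against the CIDR ranges already in use on your VPN or LAN. Python's ipaddress module makes this a one-liner (the VPN range below is a made-up example):

```python
import ipaddress

pool = ipaddress.ip_network("10.10.0.0/16")
vpn = ipaddress.ip_network("10.8.0.0/24")   # hypothetical corporate VPN range

# How many /24 networks the pool yields with "size": 24
subnets = list(pool.subnets(new_prefix=24))
print(len(subnets))                          # 256 networks available

# Verify none of them collide with the VPN range
print(any(s.overlaps(vpn) for s in subnets)) # False — no conflict
```

Run the same overlap check against every route your VPN client pushes; a single overlapping /24 is enough to silently blackhole container traffic.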
Set Appropriate MTU
# Cloud environments often have non-standard MTU (e.g., 1450 for VPCs)
docker network create --opt com.docker.network.driver.mtu=1450 cloud-net
# In daemon.json for all networks:
{ "mtu": 1450 }
Monitor and Document Your Network
# Check container network stats
docker stats --format "table {{.Name}}\t{{.NetIO}}"
As your Docker deployment grows, maintain a diagram of which services are on which networks, which ports are published, and how traffic flows. The Compose file serves as partial documentation, but a visual diagram catches issues that YAML cannot.
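A starting point for that documentation can be generated from `docker network inspect` output. The sketch below parses the JSON shape that inspect emits into a network-to-containers map — the sample here is hand-written stand-in data, not real output, and in practice you would feed the function the actual inspect JSON instead:

```python
import json

# Hand-written sample mimicking `docker network inspect <net>` output;
# replace it with the real command's stdout in practice.
sample = json.loads("""
[{"Name": "backend",
  "Containers": {
    "abc123": {"Name": "api", "IPv4Address": "172.28.0.2/16"},
    "def456": {"Name": "db",  "IPv4Address": "172.28.0.3/16"}}}]
""")

def summarize(inspect_output: list) -> dict:
    """Map each network name to its attached containers and their IPs."""
    return {
        net["Name"]: {c["Name"]: c["IPv4Address"]
                      for c in net.get("Containers", {}).values()}
        for net in inspect_output
    }

print(summarize(sample))
# {'backend': {'api': '172.28.0.2/16', 'db': '172.28.0.3/16'}}
```

Running this for every network and dumping the result into your repo on each deploy gives you a living record of the topology that drifts far less than a hand-drawn diagram.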
Frequently Asked Questions
What is the difference between the default bridge network and a user-defined bridge network?
The default bridge network (docker0) is created automatically and provides basic connectivity, but containers on it can only communicate by IP address, not by name. User-defined bridge networks provide automatic DNS resolution so containers can reach each other by container name or service name, offer better isolation since containers must be explicitly connected, allow live connection and disconnection without restarting containers, and support configurable subnets and gateways. You should always use user-defined bridge networks in production instead of the default bridge.
How does DNS resolution work inside Docker networks?
Docker runs an embedded DNS server at 127.0.0.11 inside every user-defined network. When a container performs a DNS lookup for another container's name, this embedded DNS server resolves it to that container's IP address on the shared network. In Docker Compose, services automatically get DNS entries matching their service name, so a service named db is reachable at hostname db from any other service on the same network. The DNS server also supports network aliases, allowing multiple DNS names for a single container. The default bridge network does not have this embedded DNS — only the legacy --link mechanism works there.
When should I use the host network driver instead of bridge?
Use the host network driver when you need maximum network performance and cannot afford the overhead of network address translation (NAT). With host networking, the container shares the host's network namespace directly — there is no network isolation, and the container binds to host ports without any port mapping. This is useful for network-intensive applications like load balancers, monitoring agents, or services that need to see the true client IP without proxy protocol. The trade-off is that you lose port isolation (only one container can bind to port 80) and the container can see all host network interfaces. Host networking is native to Linux; Docker Desktop for Mac and Windows supports it only in recent releases, as an opt-in feature.
How do I connect containers across multiple Docker hosts?
Use the overlay network driver, which creates a distributed network spanning multiple Docker hosts. Overlay networks require Docker Swarm mode to be initialized. Once Swarm is active, you create an overlay network with docker network create --driver overlay my-overlay and services deployed to the swarm can communicate across hosts as if they were on the same local network. For standalone containers (not swarm services), create the overlay with the --attachable flag. Overlay networks use VXLAN encapsulation to tunnel traffic between hosts, and Docker handles the routing, encryption (optional with --opt encrypted), and service discovery automatically.
How do I troubleshoot Docker networking issues?
Start with docker network inspect <network> to verify which containers are connected and their IP addresses. Use docker exec <container> ping <target> or docker exec <container> nslookup <service> to test connectivity and DNS resolution. For deeper inspection, use nsenter --target <PID> --net to enter a container's network namespace and run tcpdump or ss. Check docker logs <container> for connection errors, verify port mappings with docker port <container>, and confirm iptables rules with sudo iptables -L -n -t nat. Common issues include containers on different networks, DNS not working on the default bridge, firewall rules blocking traffic, and port conflicts on the host.
Conclusion
Docker networking is the connective tissue of containerized applications. The core principles are straightforward: use user-defined bridge networks for single-host deployments, overlay networks for multi-host clusters, and always segment your networks by function. The default bridge network exists for backward compatibility — avoid it in production. DNS resolution, port mapping, and network isolation are the three pillars that everything else builds on.
Start with a simple user-defined bridge network, understand how DNS resolves container names, and build up from there. Add network segmentation when your application has multiple tiers, overlay networks when you need multiple hosts, and always keep security in mind by binding ports to localhost, using internal networks for backend services, and never publishing database ports to the world.
Learn More
- Docker Compose: The Complete Guide — services, networks, volumes, and production deployment with Compose
- Docker Multi-Stage Builds Guide — build smaller, faster images with multi-stage Dockerfiles
- Nginx Load Balancing: The Complete Guide — production load balancing strategies for containerized apps
- Docker Cheat Sheet — quick reference for Docker commands including networking