Docker Networking: The Complete Guide for 2026

Published February 12, 2026 · 30 min read

Docker networking is what turns isolated containers into connected applications. A web server that cannot reach its database is useless. A microservice that cannot find its dependencies is dead on arrival. Understanding how Docker networking works — from the kernel-level plumbing to the high-level abstractions — is essential for building reliable, secure, and performant containerized systems.

This guide covers every aspect of Docker networking: the five network drivers, DNS and service discovery, port mapping, network isolation, multi-host communication, Docker Compose networking, IPv6, performance tuning, and real-world troubleshooting. Whether you are connecting two containers on a laptop or orchestrating services across a cluster, you will find practical examples you can use immediately.

⚙ Quick reference: Keep our Docker Commands Cheat Sheet open while reading. For Docker Compose networking, see the Docker Compose Cheat Sheet.

Table of Contents

  1. Docker Networking Overview and Architecture
  2. Network Drivers: Bridge, Host, Overlay, Macvlan, None
  3. Default Bridge Network
  4. User-Defined Bridge Networks
  5. Container DNS and Service Discovery
  6. Port Mapping and Publishing
  7. Network Isolation and Security
  8. Multi-Host Networking with Overlay
  9. Docker Compose Networking
  10. Network Troubleshooting
  11. IPv6 in Docker
  12. Network Performance Tuning
  13. Best Practices
  14. Frequently Asked Questions

1. Docker Networking Overview and Architecture

Docker networking is built on top of Linux kernel primitives: network namespaces, virtual Ethernet pairs (veth), bridges, and iptables rules. Each container gets its own network namespace, which provides an isolated network stack with its own interfaces, routing table, and firewall rules. Docker then connects these isolated namespaces together using virtual network infrastructure.

When Docker starts, it creates a virtual bridge called docker0 on the host. Each new container gets a virtual Ethernet (veth) pair: one end is placed inside the container as eth0, and the other end is attached to the docker0 bridge. Containers communicate with each other through the bridge and reach the outside world through NAT rules on the host's network interface.

Docker manages all of this automatically. When you run docker run nginx, Docker creates the network namespace, sets up the veth pair, assigns an IP address from the bridge's subnet, configures routing, and sets up iptables rules for NAT — all in milliseconds. Understanding these internals helps when you need to debug connectivity issues or design complex network topologies.

Key Networking Concepts

# View Docker's networking primitives on the host
ip link show                    # See veth pairs and bridges
bridge link show                # See bridge connections
sudo iptables -t nat -L -n     # See Docker's NAT rules
ip netns list                   # See network namespaces (may be empty; Docker uses unlisted namespaces)
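
To connect the dots between a container's eth0 and its host-side veth peer, you can compare interface indexes. A minimal sketch, assuming a running container named web:

# Find the host-side veth peer of a container's eth0
docker exec web cat /sys/class/net/eth0/iflink
# 14
ip link | grep '^14:'
# 14: vethab12cd3@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> ...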

2. Network Drivers: Bridge, Host, Overlay, Macvlan, None

Docker uses a pluggable driver architecture for networking. Each driver implements a different networking model suited to different use cases. Here are the five most commonly used built-in drivers:

Bridge (default)

The bridge driver creates a private internal network on the host. Containers on the same bridge network can communicate with each other. Traffic to the outside world goes through NAT. This is the default for standalone containers and the right choice for most single-host applications.

# Create a bridge network
docker network create --driver bridge my-app-net

# Run a container on that network
docker run -d --name web --network my-app-net nginx

Host

The host driver removes network isolation entirely. The container shares the host's network namespace, meaning it uses the host's IP address and ports directly. No NAT, no port mapping, no veth pairs. This provides maximum network performance but at the cost of isolation.

# Run with host networking (Linux only)
docker run -d --network host nginx
# nginx is now accessible on the host's port 80 directly

Overlay

The overlay driver creates a distributed network spanning multiple Docker hosts. It uses VXLAN encapsulation to tunnel Layer 2 frames through the Layer 3 network between hosts. This is the driver used by Docker Swarm for service-to-service communication across a cluster.

# Create an overlay network (requires Swarm mode)
docker network create --driver overlay --attachable my-overlay-net

Macvlan

The macvlan driver assigns a MAC address to each container, making it appear as a physical device on the network. Containers get IP addresses from a statically configured range managed by Docker's IPAM; the built-in driver does not lease addresses from the LAN's DHCP server, so reserve a range that DHCP will not hand out. This is useful when containers need to appear as first-class citizens on the LAN — for example, legacy applications that require direct Layer 2 connectivity.

# Create a macvlan network
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my-macvlan-net
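
One macvlan caveat: by design, the host cannot talk to macvlan containers through the parent interface. A common workaround is a macvlan "shim" interface on the host. The interface name, addresses, and reserved range below are placeholders for illustration:

# Create a macvlan shim so host <-> container traffic works
sudo ip link add mvlan-shim link eth0 type macvlan mode bridge
sudo ip addr add 192.168.1.250/32 dev mvlan-shim
sudo ip link set mvlan-shim up
# Route only the containers' reserved range through the shim
sudo ip route add 192.168.1.192/27 dev mvlan-shim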

None

The none driver disables networking entirely. The container gets a loopback interface and nothing else. Use this for containers that must be completely network-isolated — batch processing jobs, security-sensitive computations, or containers that communicate only through shared volumes.

# Run with no networking
docker run -d --network none my-batch-job

# Verify: only loopback exists
docker exec my-batch-job ip addr
# 1: lo: <LOOPBACK,UP> inet 127.0.0.1/8

Driver Comparison

Driver     | Isolation | Performance | Multi-Host | Use Case
-----------+-----------+-------------+------------+----------------------------------
bridge     | Good      | Good        | No         | Default single-host networking
host       | None      | Best        | No         | High-throughput, monitoring tools
overlay    | Good      | Moderate    | Yes        | Swarm services, multi-host apps
macvlan    | Good      | Excellent   | No*        | LAN integration, legacy apps
none       | Complete  | N/A         | No         | Network-isolated workloads

* macvlan containers can communicate across hosts via the physical network

3. Default Bridge Network

When the Docker daemon first starts, it creates a default bridge network named bridge (backed by the docker0 Linux bridge). Every container that does not specify a network is connected to this default bridge. While it works for quick experiments, it has significant limitations that make it unsuitable for real applications.

# Inspect the default bridge network
docker network inspect bridge

# Output (abbreviated):
{
    "Name": "bridge",
    "Driver": "bridge",
    "IPAM": {
        "Config": [{ "Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1" }]
    }
}

Limitations of the Default Bridge

The main drawbacks: containers can only reach each other by IP address (there is no automatic DNS on the default bridge), every container that omits --network shares the same flat network, and containers cannot be moved off the default bridge without being stopped and recreated. The DNS limitation is the one that bites first:

# Demonstrating the DNS limitation on the default bridge
docker run -d --name db-default postgres:16-alpine
docker run --rm alpine ping -c 2 db-default
# ping: bad address 'db-default'  <-- name resolution fails

# You must use the IP address instead
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db-default
# 172.17.0.2
docker run --rm alpine ping -c 2 172.17.0.2
# PING 172.17.0.2: 64 bytes from 172.17.0.2  <-- works, but fragile

The default bridge exists for backward compatibility. For any real project, always create a user-defined bridge network.

4. User-Defined Bridge Networks

User-defined bridge networks solve every limitation of the default bridge. They provide automatic DNS, better isolation, and the ability to connect or disconnect containers on the fly. This is the network type you should use for nearly all single-host applications.

# Create a user-defined bridge network
docker network create app-net

# Run containers on that network
docker run -d --name db --network app-net postgres:16-alpine
docker run -d --name web --network app-net -p 8080:80 nginx

# Containers can reach each other by name
docker exec web ping -c 2 db
# PING db (172.18.0.2): 56 data bytes
# 64 bytes from 172.18.0.2  <-- automatic DNS resolution works!
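
User-defined networks can also be attached and detached while a container is running, which the default bridge does not allow. A quick sketch with a placeholder container name:

# Attach or detach a running container without restarting it
docker network connect app-net some-running-container
docker network disconnect app-net some-running-container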

Custom Subnets and Gateways

You can control the IP range and gateway used by a user-defined network. This is useful when you need to avoid conflicts with your corporate VPN, existing infrastructure, or other Docker networks.

# Create a network with a custom subnet
docker network create \
  --subnet=10.10.0.0/24 \
  --gateway=10.10.0.1 \
  custom-net

# Assign a specific IP to a container
docker run -d --name api \
  --network custom-net \
  --ip 10.10.0.100 \
  my-api:latest

Connecting to Multiple Networks

A container can be connected to multiple networks simultaneously. This is a powerful segmentation pattern: selected containers act as bridges between application tiers while everything else stays isolated.

# Create frontend and backend networks
docker network create frontend
docker network create backend

# The API container bridges both networks
docker run -d --name api --network frontend my-api
docker network connect backend api

# Web server only on frontend
docker run -d --name web --network frontend nginx

# Database only on backend
docker run -d --name db --network backend postgres:16-alpine

# Result: web can reach api, api can reach db, but web CANNOT reach db
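
A quick way to confirm the isolation, assuming the containers above are running:

# web shares frontend with api, but has no network in common with db
docker exec web ping -c 1 -W 1 api   # succeeds
docker exec web ping -c 1 -W 1 db    # fails: ping: bad address 'db'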

Network Aliases

Network aliases let a container respond to multiple DNS names on a network. This is useful for providing a stable service name that might be backed by different containers over time.

# Create a container with a network alias
docker run -d --name postgres-primary \
  --network app-net \
  --network-alias db \
  --network-alias database \
  postgres:16-alpine

# Both names resolve to the same container
docker exec web ping db           # works
docker exec web ping database     # works
docker exec web ping postgres-primary  # also works

5. Container DNS and Service Discovery

Docker runs an embedded DNS server at 127.0.0.11 inside every container attached to a user-defined network. This DNS server resolves container names, service names, and network aliases to their current IP addresses. When a container restarts and gets a new IP, the DNS records update automatically.

# Container DNS flow:
# Container "web" --> queries "db" --> Docker DNS (127.0.0.11) --> resolves to 172.18.0.3
# If name is not a container/service, Docker forwards to external DNS

How DNS Resolution Works

When a container looks up a hostname, the request goes to Docker's embedded DNS server. If the name matches a container, service, or alias on any network the requesting container is attached to, Docker returns the corresponding IP. If it does not match, Docker forwards the query to the host's configured DNS servers.

# Test DNS resolution inside a container
docker exec web nslookup db
# Server:    127.0.0.11
# Name:      db
# Address:   172.18.0.3
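
You can see the embedded resolver in any attached container's resolv.conf:

# Docker rewrites resolv.conf to point at the embedded DNS server
docker exec web cat /etc/resolv.conf
# nameserver 127.0.0.11
# options ndots:0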

Custom DNS Configuration

# Use custom DNS servers
docker run -d --name web \
  --dns 8.8.8.8 \
  --dns 1.1.1.1 \
  nginx

# Add custom host entries
docker run -d --name web \
  --add-host myservice:10.0.0.50 \
  --add-host host.docker.internal:host-gateway \
  nginx

# Set the container's search domain
docker run -d --name web \
  --dns-search example.com \
  nginx
# Now "api" resolves as "api.example.com"

Round-Robin DNS for Load Distribution

When multiple containers share the same network alias, Docker's DNS returns all of their IP addresses. This provides basic round-robin load distribution at the DNS level.

# Start three workers with the same alias
docker run -d --name worker1 --network app-net --network-alias worker my-worker
docker run -d --name worker2 --network app-net --network-alias worker my-worker
docker run -d --name worker3 --network app-net --network-alias worker my-worker

# DNS returns all three IPs
docker exec web nslookup worker
# Name: worker  Address: 172.18.0.4
# Name: worker  Address: 172.18.0.5
# Name: worker  Address: 172.18.0.6
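
Note that this is DNS-level distribution only: many clients resolve once and cache the result, so do not rely on it for even load balancing. A rough way to observe the rotation (output order may vary):

# Repeated lookups return the worker IPs in varying order
docker exec web sh -c 'for i in 1 2 3; do getent hosts worker | head -n1; done'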

6. Port Mapping and Publishing

Port mapping is how you expose container services to the outside world. By default, container ports are only accessible within Docker networks. Publishing a port creates an iptables rule that forwards traffic from a host port to a container port.

# Publish a specific host port to a container port
docker run -d -p 8080:80 nginx
# Host port 8080 --> Container port 80

# Publish to a specific host IP (not all interfaces)
docker run -d -p 127.0.0.1:8080:80 nginx
# Only accessible from localhost

# Let Docker choose a random host port
docker run -d -p 80 nginx
docker port $(docker ps -lq)
# 80/tcp -> 0.0.0.0:32768

# Publish multiple ports
docker run -d -p 80:80 -p 443:443 nginx

# Publish a range of ports
docker run -d -p 8000-8010:8000-8010 my-app

# Publish UDP ports
docker run -d -p 53:53/udp my-dns-server

# Publish both TCP and UDP on the same port
docker run -d -p 1234:1234/tcp -p 1234:1234/udp my-app
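
Under the hood, each -p flag becomes a DNAT rule in the nat table. A sketch of how to see the rule created by -p 8080:80 (the container IP will vary):

# Inspect the NAT rule Docker created for the published port
sudo iptables -t nat -L DOCKER -n | grep 8080
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:172.17.0.2:80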

EXPOSE vs -p (Publish)

The EXPOSE instruction in a Dockerfile is essentially documentation. It records which ports the application uses (and feeds the -P flag), but it does not publish anything by itself. Only the -p flag (or -P, to publish all exposed ports) at runtime creates an actual port mapping.

# In Dockerfile:
EXPOSE 80 443
# This is documentation; ports are NOT accessible from outside

# At runtime, you still need -p:
docker run -d -p 80:80 -p 443:443 my-image

# Or use -P to publish all EXPOSE'd ports to random host ports:
docker run -d -P my-image

7. Network Isolation and Security

Docker networks provide isolation by default. Containers on different user-defined networks cannot communicate with each other unless explicitly connected to a shared network. This is a fundamental security mechanism for containerized applications.

Network Segmentation

# Example: Three-tier application with network segmentation
#
# [Internet] --> [reverse-proxy] --> [api] --> [database]
#                 (public-net)    (app-net)  (data-net)

docker network create public-net
docker network create app-net
docker network create data-net

# Reverse proxy: public-net + app-net
docker run -d --name proxy --network public-net -p 80:80 nginx
docker network connect app-net proxy

# API server: app-net + data-net
docker run -d --name api --network app-net my-api
docker network connect data-net api

# Database: data-net only
docker run -d --name db --network data-net postgres:16-alpine

# Result:
# - proxy can reach api (shared app-net)
# - api can reach db (shared data-net)
# - proxy CANNOT reach db (no shared network)
# - db is completely isolated from the internet

Internal Networks

An internal network blocks all outbound traffic to the external network. Containers on an internal network can only communicate with each other. This is useful for databases or other services that should never initiate external connections.

# Create an internal-only network
docker network create --internal secure-net

# Database on the internal network
docker run -d --name db --network secure-net postgres:16-alpine

# The database cannot reach the internet
docker exec db ping -c 2 8.8.8.8
# ping: sendto: Network is unreachable

Disabling Inter-Container Communication

# Create a network where containers cannot talk to each other
docker network create --opt com.docker.network.bridge.enable_icc=false isolated-net

# Containers on this network can reach the outside world
# but cannot communicate with each other (only through published ports)
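
A sketch to confirm the behavior, with placeholder container names:

docker run -d --name app1 --network isolated-net alpine sleep 1d
docker run -d --name app2 --network isolated-net alpine sleep 1d
docker exec app1 ping -c 1 -W 1 app2
# 100% packet loss: the name still resolves, but ICC drops the traffic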

Docker and the Host Firewall

Docker manipulates iptables rules directly to implement port publishing and NAT. This can bypass firewall rules set by UFW, firewalld, or raw iptables. A port published with -p 8080:80 is reachable from the network even if UFW blocks port 8080: the traffic is DNAT'ed in the nat table and then flows through the FORWARD chain (where Docker's DOCKER chain is consulted), so it never reaches the INPUT chain that UFW manages.

# See Docker's iptables rules
sudo iptables -L DOCKER -n
sudo iptables -t nat -L DOCKER -n

# To prevent Docker from modifying iptables (use with caution):
# Edit /etc/docker/daemon.json
{
    "iptables": false
}
# Then manage ALL port forwarding manually

# Better approach: bind to localhost when you don't need external access
docker run -d -p 127.0.0.1:5432:5432 postgres:16-alpine

8. Multi-Host Networking with Overlay

Overlay networks allow containers running on different Docker hosts to communicate as if they were on the same local network. Docker uses VXLAN (Virtual Extensible LAN) to encapsulate Layer 2 frames inside Layer 3 UDP packets, creating a virtual network that spans physical boundaries.

Setting Up Overlay Networks

# Step 1: Initialize Swarm on the first host (manager)
docker swarm init --advertise-addr 192.168.1.10

# Step 2: Join additional hosts as workers
docker swarm join --token SWMTKN-1-xxx 192.168.1.10:2377

# Step 3: Create an overlay network
docker network create --driver overlay --attachable my-overlay

# Step 4: Deploy services on the overlay network
docker service create --name web --network my-overlay -p 80:80 nginx
docker service create --name api --network my-overlay my-api

# Containers can communicate across hosts by service name
# web can reach api at "api:8080" regardless of host placement
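
By default, Swarm gives each service a virtual IP (VIP) and load-balances connections across the service's tasks. A sketch of what resolution looks like from inside a task container (names from the example above; lookup syntax assumed):

# Service names resolve to the VIP, not an individual task IP
docker exec $(docker ps -q --filter name=web) nslookup api
# Name: api   Address: 10.0.1.4   <-- service VIP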

Encrypted and Attachable Overlay Networks

By default, Swarm encrypts overlay control-plane (gossip) traffic, but not the application data plane. Enable data-plane encryption for sensitive traffic. Use --attachable to let standalone containers (not just Swarm services) join the overlay.

# Encrypted overlay: all inter-host traffic uses IPsec (~10-15% overhead)
docker network create --driver overlay --opt encrypted --attachable secure-overlay

# Standalone containers can join attachable overlays
docker run --rm -it --network secure-overlay alpine sh

9. Docker Compose Networking

Docker Compose creates networking automatically. When you run docker compose up, Compose creates a dedicated bridge network for the project and connects all services to it. Services can reach each other using the service name as the hostname. For most applications, you do not need to configure networking at all — it works out of the box.

Default Networking

# docker-compose.yml
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  api:
    image: my-api
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
  db:
    image: postgres:16-alpine

# Running "docker compose up" creates:
# - Network: myproject_default (bridge)
# - web can reach api at "api" and db at "db"
# - api can reach db at "db:5432"
# - web is published to host port 8080

Custom Networks in Compose

# docker-compose.yml with custom network segmentation
services:
  proxy:
    image: nginx
    ports:
      - "80:80"
    networks:
      - frontend

  api:
    image: my-api
    networks:
      - frontend
      - backend

  db:
    image: postgres:16-alpine
    networks:
      - backend

  redis:
    image: redis:7-alpine
    networks:
      - backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true   # no external access for the backend network

Custom IPAM in Compose

# Specify subnets and static IPs
services:
  api:
    image: my-api
    networks:
      app-net:
        ipv4_address: 10.5.0.10

networks:
  app-net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/24
          gateway: 10.5.0.1

Connecting to External Networks

# Connect to a pre-existing network (created outside Compose)
# First, create the network: docker network create shared-net

services:
  web:
    image: nginx
    networks:
      - default       # the project's default network
      - shared-net    # the external pre-existing network

networks:
  shared-net:
    external: true    # tells Compose this network already exists

Network Aliases in Compose

# Give services additional DNS names
services:
  postgres-primary:
    image: postgres:16-alpine
    networks:
      backend:
        aliases:
          - db
          - database
          - primary-db

networks:
  backend:

For complete Compose networking reference, including depends_on, healthchecks, and service profiles, see our Docker Compose Complete Guide.

10. Network Troubleshooting

Network issues are among the most common problems in containerized applications. Docker provides several tools for inspecting and debugging network configuration.

docker network inspect

# List all networks
docker network ls

# Inspect a network in detail
docker network inspect app-net

# Key information in the output:
# - Subnet and gateway
# - Connected containers with their IP addresses
# - Driver options and labels
# - IPAM configuration

# Filter for specific fields
docker network inspect -f '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{"\n"}}{{end}}' app-net
# web: 172.18.0.2/16
# api: 172.18.0.3/16
# db: 172.18.0.4/16

Debugging from Inside a Container

# Start a debug container on the same network
docker run --rm -it --network app-net nicolaka/netshoot

# Inside the debug container, you have access to:
ping api                    # Test connectivity
nslookup db                 # Test DNS resolution
dig web                     # Detailed DNS query
curl http://api:8080/health # Test HTTP endpoints
traceroute db               # Trace network path
ss -tlnp                    # List listening ports
ip addr                     # Show network interfaces
ip route                    # Show routing table
tcpdump -i eth0 port 5432   # Capture traffic (NET_RAW suffices and is granted by default; promiscuous mode may need --cap-add NET_ADMIN)

Common Problems and Solutions

# Problem: Container cannot resolve another container's name
# Solution: Ensure both containers are on the SAME user-defined network
docker network inspect app-net  # Check both containers are listed

# Problem: "port is already allocated"
# Solution: Find what is using the port
sudo lsof -i :8080
# or
docker ps --format '{{.Names}}: {{.Ports}}' | grep 8080

# Problem: Container cannot reach the internet
# Solution: Check DNS and routing
docker exec my-container cat /etc/resolv.conf
docker exec my-container ip route
# Default route should point to the gateway

# Problem: Cannot connect to host service from container
# Solution: Use host.docker.internal (Docker Desktop) or the bridge gateway IP
docker exec my-container ping host.docker.internal
# On Linux: docker run --add-host host.docker.internal:host-gateway my-image
# Or use the bridge gateway: 172.17.0.1 (default bridge)

Inspecting Traffic

# Capture traffic on the docker0 bridge
sudo tcpdump -i docker0 -n

# Check Docker logs for network events
docker events --filter type=network

11. IPv6 in Docker

Docker supports IPv6 networking, but it is not enabled by default. With IPv6 adoption growing, many production environments require dual-stack (IPv4 + IPv6) container networking.

Enabling IPv6

# Enable IPv6 in the Docker daemon
# Edit /etc/docker/daemon.json:
{
    "ipv6": true,
    "fixed-cidr-v6": "fd00::/80",
    "ip6tables": true
}
# Note: recent Docker Engine releases enable ip6tables by default;
# on older releases, "ip6tables" also requires "experimental": true

# Restart Docker
sudo systemctl restart docker

# Verify IPv6 is enabled
docker network inspect bridge | grep -i ipv6

IPv6 with User-Defined Networks

# Create a dual-stack network
docker network create \
  --ipv6 \
  --subnet=172.20.0.0/16 \
  --subnet=fd00:dead:beef::/48 \
  dual-stack-net

# Run a container on the dual-stack network
docker run -d --name web --network dual-stack-net nginx

# Verify both addresses
docker exec web ip addr show eth0
# inet 172.20.0.2/16 scope global eth0
# inet6 fd00:dead:beef::2/48 scope global

IPv6 Port Publishing

# Publish on all IPv6 interfaces
docker run -d -p [::]:80:80 nginx

# Publish on IPv6 localhost only
docker run -d -p [::1]:80:80 nginx

12. Network Performance Tuning

Docker's default networking configuration is optimized for compatibility, not performance. For high-throughput applications, there are several tuning options that can significantly improve network performance.

Host Networking for Maximum Performance

The single biggest performance gain comes from using host networking, which eliminates all NAT overhead, veth pair traversal, and bridge processing. In benchmarks, host networking typically matches bare-metal network performance.

# ~0% overhead: use host networking
docker run -d --network host my-high-throughput-app

# Performance comparison (approximate):
# host network:  ~0% overhead vs bare metal
# bridge:        ~1-5% overhead for throughput, ~10-50us latency added
# overlay:       ~5-15% overhead (VXLAN encapsulation)
# overlay+enc:   ~15-25% overhead (VXLAN + IPsec)

MTU Configuration

Mismatched MTU values cause packet fragmentation and significant performance degradation. This is especially common with overlay networks, where VXLAN encapsulation adds 50 bytes of overhead.

# Set MTU on a network
docker network create --opt com.docker.network.driver.mtu=1450 my-net

# Set MTU globally in daemon.json
{
    "mtu": 1450
}

# For overlay networks, the inner MTU should account for VXLAN overhead
# If your physical network MTU is 1500, set overlay MTU to 1450
# If your physical network supports jumbo frames (MTU 9000), set overlay to 8950
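
To confirm what a container actually received (container name assumed):

# Verify the MTU inside a container
docker exec web ip link show eth0
# ... mtu 1450 qdisc noqueue state UP ...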

Kernel Tuning for Container Networking

# Key sysctl settings for high-throughput container networking
sudo sysctl -w net.netfilter.nf_conntrack_max=1048576   # connection tracking table
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535" # local port range
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr       # BBR congestion control
sudo sysctl -w net.core.rmem_max=16777216                # socket buffer sizes
sudo sysctl -w net.core.wmem_max=16777216
# Make persistent in /etc/sysctl.d/99-docker-network.conf

Userland Proxy

Docker uses a userland proxy (docker-proxy) for port forwarding by default. Disabling it in favor of pure iptables rules improves performance for published ports.

# Disable the userland proxy in daemon.json
{
    "userland-proxy": false
}

# The iptables hairpin NAT rules handle port forwarding instead
# This reduces memory usage and improves throughput for published ports

13. Best Practices

Always Use User-Defined Networks

Never rely on the default bridge network. Create purpose-specific networks for your applications. This gives you automatic DNS, better isolation, and the ability to manage network connections without restarting containers.

Segment Networks by Trust Level

Follow the principle of least privilege for networking. A database should not be on the same network as a public-facing reverse proxy. Create separate networks for each tier and only connect containers to the networks they actually need.

# Good: tiered network segmentation
docker network create public      # reverse proxy
docker network create application  # API servers
docker network create data         # databases, caches

# Bad: everything on one flat network
docker network create my-app  # all containers here

Use Internal Networks for Backend Services

Mark backend networks as internal to prevent containers from making outbound internet connections. This limits the blast radius if a container is compromised.
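
In Compose this is a single flag on the network definition, as shown in section 9:

networks:
  backend:
    internal: true   # backend containers cannot initiate external connections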

Bind to Localhost in Development

When running containers locally, bind ports to 127.0.0.1 instead of all interfaces. This prevents your development database from being accessible on the network.

# Development: bind to localhost
docker run -d -p 127.0.0.1:5432:5432 postgres:16-alpine

# Production: explicit interface binding
docker run -d -p 10.0.0.5:5432:5432 postgres:16-alpine

Set Explicit Subnets to Avoid Conflicts

Docker picks subnets automatically, which can conflict with corporate VPNs, cloud VPCs, or other infrastructure. Always specify subnets for production networks.

# Configure Docker's default address pool in daemon.json
{
    "default-address-pools": [
        { "base": "10.200.0.0/16", "size": 24 },
        { "base": "10.201.0.0/16", "size": 24 }
    ]
}

Clean Up Unused Networks

# Remove all networks not used by any container
docker network prune

# Remove a specific network
docker network rm old-net

# List networks with no connected containers
docker network ls --filter "dangling=true"

Document Your Network Architecture

For any non-trivial application, diagram which containers are on which networks and where ports are published. Docker Compose files serve as living documentation — keep them clear and well-commented.

Frequently Asked Questions

What is the difference between the default bridge network and a user-defined bridge network?

The default bridge network (docker0) connects all containers that do not specify a network, but containers can only communicate using IP addresses — not container names. User-defined bridge networks provide automatic DNS resolution, so containers can reach each other by name (for example, ping db instead of ping 172.17.0.3). User-defined bridges also offer better isolation since only explicitly connected containers can communicate, and they allow you to connect and disconnect containers without restarting them. For any multi-container application, you should always use a user-defined bridge network.

How does Docker handle DNS resolution between containers?

Docker runs an embedded DNS server at 127.0.0.11 inside every container connected to a user-defined network. When a container makes a DNS query for another container's name, Docker's DNS server resolves it to that container's IP address on the shared network. This works for container names, service names in Docker Compose, and network aliases. The default bridge network does not get this DNS service — only user-defined networks do. You can also configure custom DNS servers using the --dns flag when running containers.

When should I use host networking instead of bridge networking?

Use host networking (--network host) when your application needs maximum network performance and you want to eliminate the overhead of NAT. Common use cases include high-throughput applications like load balancers, monitoring tools that need to see all host network traffic, and applications that bind to many dynamic ports. The trade-off is that you lose network isolation entirely — the container shares the host's network namespace, so port conflicts are possible. Host networking is only available on Linux.

How do I connect containers across multiple Docker hosts?

Use Docker's overlay network driver, which creates a distributed network spanning multiple hosts. Initialize a Docker Swarm cluster with docker swarm init, create an overlay network with docker network create --driver overlay my-overlay, and deploy services on it. Containers on the overlay network communicate as if they were on the same LAN, regardless of which physical host they run on. The overlay driver uses VXLAN encapsulation to tunnel traffic. For standalone containers, use the --attachable flag on the overlay network.

Why can't my container connect to a service running on the host machine?

Containers on bridge networks cannot use localhost to reach host services because localhost inside the container refers to the container itself. On Docker Desktop (Mac and Windows), use host.docker.internal. On Linux, add --add-host=host.docker.internal:host-gateway to your docker run command. Alternatively, use host networking (--network host) to share the host's network stack, or connect to the Docker bridge gateway IP (typically 172.17.0.1).

How does Docker Compose handle networking automatically?

Docker Compose creates a dedicated bridge network for each project, named <project-dir>_default. Every service in the Compose file is connected to this network and can reach other services using the service name as the hostname. For example, a web service connects to a db service at db:5432. You can define custom networks in the Compose file to segment services, use internal networks for backend isolation, or connect to external pre-existing networks. Compose supports all Docker network drivers.

Conclusion

Docker networking is the connective tissue that turns individual containers into working applications. From the default bridge that gets you started to overlay networks spanning a cluster, Docker provides networking primitives for every scale and requirement.

The most important takeaways: always use user-defined bridge networks instead of the default bridge, segment your networks by trust level, understand that Docker's iptables rules can bypass your host firewall, and choose the right network driver for your performance and isolation requirements. For Docker Compose projects, the default networking works well for most applications — you only need custom network configuration when you want to enforce isolation between service tiers.

If you are debugging a networking issue, start with docker network inspect to verify containers are on the same network, use a netshoot container for detailed diagnostics, and check DNS resolution before assuming a connectivity problem. Most Docker networking issues come down to containers being on different networks or DNS not resolving as expected.

⚙ Essential tools: Validate Docker Compose files with the Docker Compose Validator, convert configs with JSON to YAML, and keep our Docker Commands Cheat Sheet handy.

Learn More

Related Resources

Docker Complete Guide
Master Docker from basics to advanced production patterns
Docker Compose Guide
Multi-container orchestration with Docker Compose
Docker for Beginners
Practical beginner's guide to Docker containers
Docker Commands Cheat Sheet
Essential Docker commands quick reference
Docker Compose Cheat Sheet
Docker Compose syntax and commands reference
Docker Compose Validator
Validate and lint Docker Compose YAML files