Docker Compose: The Complete Guide for 2026
Real-world applications are rarely a single container. A typical web application needs an application server, a database, a cache, and often a reverse proxy, a background worker, and a message queue. Managing each of these with individual docker run commands — remembering every port mapping, volume mount, network, and environment variable — is tedious and error-prone. Docker Compose solves this by letting you define your entire multi-container application in a single YAML file and manage it with simple commands.
This guide covers Docker Compose from first principles through production deployment. If you are new to Docker itself, start with our Docker Containers for Beginners guide and the complete Docker guide first.
Build your docker-compose.yml faster with our Docker Compose Generator, then verify it with the Compose Validator.
If your database listens only on 127.0.0.1 (or on a private subnet), use SSH tunneling to access it safely without opening firewall ports. Generate commands with our SSH Tunnel Builder.
Table of Contents
- What is Docker Compose and Why Use It
- docker-compose.yml Syntax and Structure
- Services, Networks, and Volumes
- Environment Variables and .env Files
- Multi-Container Applications
- Health Checks
- Docker Compose Commands Reference
- Docker Compose v2 vs v1
- Production Deployment Best Practices
- Real-World Examples
- Debugging and Troubleshooting
- Frequently Asked Questions
1. What is Docker Compose and Why Use It
Docker Compose is a tool for defining and running multi-container Docker applications. You describe your services, networks, and volumes in a docker-compose.yml file, then use docker compose up to start everything in the correct order. A single command replaces dozens of docker run invocations.
Consider what it takes to run a simple web application with a database and cache without Compose:
# Without Compose: 5 commands, easy to get wrong
docker network create myapp-net
docker volume create pgdata
docker run -d --name db --network myapp-net \
-e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres:16-alpine
docker run -d --name cache --network myapp-net redis:7-alpine
docker run -d --name web --network myapp-net -p 3000:3000 \
-e DATABASE_URL=postgres://postgres:secret@db:5432/postgres \
-e REDIS_URL=redis://cache:6379 myapp:latest
With Compose, this becomes a declarative YAML file and one command:
# docker-compose.yml
services:
web:
image: myapp:latest
ports: ["3000:3000"]
environment:
DATABASE_URL: postgres://postgres:secret@db:5432/postgres
REDIS_URL: redis://cache:6379
depends_on: [db, cache]
db:
image: postgres:16-alpine
environment: { POSTGRES_PASSWORD: secret }
volumes: [pgdata:/var/lib/postgresql/data]
cache:
image: redis:7-alpine
volumes:
pgdata:
# One command to start everything:
# docker compose up -d
Why Compose matters:
- Declarative configuration — your infrastructure is version-controlled YAML, not a series of shell commands
- Reproducible environments — every developer runs the same stack with docker compose up
- Dependency management — services start in the right order with health check awareness
- Isolated projects — each Compose project gets its own network, preventing port and name collisions
- Single-command lifecycle — start, stop, rebuild, and tear down the entire stack in one command
2. docker-compose.yml Syntax and Structure
A Compose file is built from a handful of top-level keys: services (required), plus optional networks, volumes, configs, and secrets. The file uses standard YAML syntax — indentation matters, and colons separate keys from values.
# Complete structure of a docker-compose.yml
services: # Required: define your containers
web:
image: nginx:1.25-alpine
# ... service configuration
api:
build: ./backend
# ... service configuration
networks: # Optional: custom networks
frontend:
backend:
volumes: # Optional: named volumes for persistence
db-data:
cache-data:
configs: # Optional: configuration files
nginx-conf:
file: ./nginx.conf
secrets: # Optional: sensitive data
db-password:
file: ./db-password.txt
Each service definition supports dozens of options. Here are the most important ones:
services:
myservice:
# Image or build (one is required)
image: nginx:1.25-alpine # Use a pre-built image
build: # Or build from a Dockerfile
context: ./app
dockerfile: Dockerfile.prod
target: production # Multi-stage build target
args:
NODE_ENV: production
# Networking
ports:
- "8080:80" # host:container
- "127.0.0.1:9090:9090" # bind to localhost only
expose:
- "3000" # expose to other services only
networks:
- frontend
- backend
# Data
volumes:
- db-data:/var/lib/data # named volume
- ./src:/app/src # bind mount
- /app/node_modules # anonymous volume
# Configuration
environment: # inline variables
NODE_ENV: production
env_file: # or from a file
- .env.production
command: ["npm", "start"] # override CMD
entrypoint: ["/entrypoint.sh"] # override ENTRYPOINT
working_dir: /app
# Lifecycle
depends_on:
db:
condition: service_healthy
restart: unless-stopped # no | always | on-failure | unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# Resources
deploy:
resources:
limits:
memory: 512M
cpus: '1.0'
reservations:
memory: 256M
3. Services, Networks, and Volumes
Services
Each service defines a container that Compose manages. A service can use a pre-built image from a registry or build from a local Dockerfile. Compose names containers using the pattern <project>-<service>-<number>, where the project name defaults to the directory name.
services:
# Service using a pre-built image
db:
image: postgres:16-alpine
# Service built from local source
api:
build:
context: .
dockerfile: Dockerfile
# The image is built and tagged automatically
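As noted above, the project name defaults to the directory name and prefixes every container, network, and volume. When that default is not what you want (for example, several checkouts of the same repository on one host), you can set it explicitly — a minimal sketch:
# Option 1: top-level name key in docker-compose.yml
name: myapp
# Option 2: environment variable or CLI flag
# COMPOSE_PROJECT_NAME=myapp docker compose up -d
# docker compose -p myapp up -d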
Networks
Compose creates a default network for every project. All services join this network and can reach each other by service name. Custom networks let you isolate groups of services:
services:
nginx:
image: nginx:1.25-alpine
networks: [frontend, backend] # connected to both
api:
build: .
networks: [backend] # only backend
db:
image: postgres:16-alpine
networks: [backend] # only backend
networks:
frontend: # nginx can talk to external clients
backend: # api and db are isolated from frontend
# Result: nginx can reach api, api can reach db
# But external traffic cannot reach db directly
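To reach containers managed outside the project (a shared reverse proxy, for instance), mark a network as external so Compose attaches to it instead of creating it. A minimal sketch, assuming the network was created beforehand with docker network create shared:
services:
  api:
    build: .
    networks: [backend, shared]
networks:
  backend:
  shared:
    external: true   # Compose neither creates nor removes this network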
Volumes
Named volumes persist data beyond the container lifecycle. They are the correct way to handle database storage, file uploads, and any data that must survive restarts:
services:
db:
image: postgres:16-alpine
volumes:
- pgdata:/var/lib/postgresql/data # named volume for persistence
- ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro # bind mount, read-only
web:
build: .
volumes:
- ./src:/app/src # bind mount for live reloading (dev)
- /app/node_modules # anonymous volume preserves container deps
volumes:
pgdata: # Docker manages storage location
driver: local
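Docker manages the storage location of named volumes, so back them up through a container rather than copying host paths by hand. A minimal sketch using a throwaway Alpine container — the archive name is arbitrary, and "myproject" stands in for your project name, since Compose prefixes volume names with it:
# List and inspect volumes (Compose names them <project>_<volume>)
docker volume ls
docker volume inspect myproject_pgdata
# Archive the volume contents into the current directory
docker run --rm -v myproject_pgdata:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/pgdata-backup.tar.gz -C /data .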
4. Environment Variables and .env Files
Compose supports multiple methods for injecting configuration into containers, from inline definitions to external files.
Inline Environment Variables
services:
api:
image: myapp:latest
environment:
# Map syntax
NODE_ENV: production
PORT: "3000"
# Array syntax (equivalent)
# - NODE_ENV=production
# - PORT=3000
The .env File
Compose automatically loads a .env file from the project directory. Variables from this file are used for ${VARIABLE} substitution in the Compose file itself:
# .env (loaded automatically)
POSTGRES_USER=appuser
POSTGRES_PASSWORD=s3cretP@ss
POSTGRES_DB=myapp
APP_VERSION=2.3.1
# docker-compose.yml uses ${VARIABLE} substitution
services:
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
api:
image: myapp:${APP_VERSION}
environment:
DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
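Interpolation also supports defaults and required-variable errors, so a missing value fails loudly instead of silently expanding to an empty string. A short sketch:
services:
  api:
    image: myapp:${APP_VERSION:-latest}   # falls back to "latest" if APP_VERSION is unset
    environment:
      # Aborts with this message if POSTGRES_PASSWORD is missing
      DATABASE_URL: postgres://${POSTGRES_USER:-appuser}:${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}@db:5432/${POSTGRES_DB:-myapp}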
Per-Service env_file
services:
api:
image: myapp:latest
env_file:
- .env.common # shared across services
- .env.api # api-specific variables
worker:
image: myapp:latest
command: ["node", "worker.js"]
env_file:
- .env.common
- .env.worker
Variable Precedence
When the same variable is defined in multiple places, this order takes priority (highest first):
- Shell environment variables (already set in your terminal)
- The --env-file flag on the CLI: docker compose --env-file .env.prod up
- The environment: key in the Compose file
- The env_file: key in the Compose file
- The project .env file (you can check which value wins with the commands below)
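Because several layers can define the same variable, check what actually wins before deploying:
# Print the fully resolved configuration after substitution
docker compose config
# Resolve against a specific environment file
docker compose --env-file .env.prod config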
5. Multi-Container Applications
The power of Compose is orchestrating services that work together. Here is a typical web application stack with a web server, API, database, and cache:
services:
# Reverse proxy (entry point for all HTTP traffic)
nginx:
image: nginx:1.25-alpine
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
depends_on:
api:
condition: service_healthy
restart: unless-stopped
# Application server
api:
build: .
environment:
DATABASE_URL: postgres://${DB_USER}:${DB_PASS}@db:5432/${DB_NAME}
REDIS_URL: redis://cache:6379
NODE_ENV: production
depends_on:
db:
condition: service_healthy
cache:
condition: service_started
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
restart: unless-stopped
# Database
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASS}
POSTGRES_DB: ${DB_NAME}
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
# Cache
cache:
image: redis:7-alpine
command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
volumes:
- redis-data:/data
restart: unless-stopped
volumes:
pgdata:
redis-data:
The depends_on key with condition: service_healthy ensures services start in the correct order: the database must pass its health check before the API starts, and the API must be healthy before nginx begins routing traffic to it.
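In scripts and CI pipelines you can also have Compose itself block until every service is running and healthy:
# Start detached and wait for health checks to pass before returning
docker compose up -d --wait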
6. Health Checks
Without health checks, Docker only knows if a process is running — not if it is actually working. A Node.js server might be running but returning 500 errors on every request. Health checks let Docker verify that a service is functioning correctly.
# Health check syntax
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s # time between checks
timeout: 10s # max time for a single check
retries: 3 # failures before marking unhealthy
start_period: 40s # grace period for startup (failures don't count)
Common Health Check Patterns
# PostgreSQL
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d mydb"]
interval: 10s
timeout: 5s
retries: 5
# MySQL
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
interval: 10s
timeout: 5s
retries: 5
# Redis
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
# HTTP endpoint (Node.js, Python, Go, etc.)
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
# wget alternative (for Alpine images without curl)
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
Using Health Checks for Dependency Ordering
services:
api:
build: .
depends_on:
db:
condition: service_healthy # wait until db passes health check
cache:
condition: service_started # just wait until cache container starts
migrations:
condition: service_completed_successfully # wait until migrations finish
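The current health state is visible in the docker compose ps output, and you can query the latest check result directly with docker inspect:
# STATUS column shows (healthy), (unhealthy), or (health: starting)
docker compose ps
# Full health check result for a single service's container
docker inspect --format '{{json .State.Health}}' $(docker compose ps -q db)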
7. Docker Compose Commands Reference
# Lifecycle
docker compose up -d # start all services in background
docker compose up -d --build # rebuild images before starting
docker compose down # stop and remove containers + networks
docker compose down -v # also remove volumes (destroys data!)
docker compose stop # stop without removing
docker compose start # start previously stopped services
docker compose restart # restart all services
docker compose restart api # restart a single service
# Building
docker compose build # build all images
docker compose build api # build a single service image
docker compose build --no-cache # force fresh build
# Viewing
docker compose ps # list running services
docker compose ps -a # include stopped services
docker compose logs # view all logs
docker compose logs -f api # follow logs for a service
docker compose logs --tail 50 api # last 50 lines
docker compose top # running processes in each service
# Running commands
docker compose exec api sh # shell into running service
docker compose exec db psql -U user # run psql in database container
docker compose run --rm api npm test # run one-off command (creates new container)
# Scaling
docker compose up -d --scale worker=5 # run 5 instances of worker service
# Configuration
docker compose config # validate and view resolved config
docker compose config --services # list service names
docker compose config --volumes # list volume names
# Cleanup
docker compose down --rmi all # remove images too
docker compose down --rmi local # remove only locally built images
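To move a running stack to newer image versions, pull first and then re-run up; Compose only recreates containers whose image or configuration changed:
# Updating
docker compose pull                   # pull newer images for all services
docker compose pull api               # pull a single service image
docker compose up -d                  # recreate only changed containers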
8. Docker Compose v2 vs v1
Docker Compose v1 (the Python-based docker-compose with a hyphen) reached end of life in July 2023. Docker Compose v2 is a Go-based plugin for the Docker CLI, invoked as docker compose with a space. If you are starting a new project, you are already using v2.
# v1 (deprecated, end of life)
docker-compose up -d
docker-compose down
docker-compose logs
# v2 (current, use this)
docker compose up -d
docker compose down
docker compose logs
Key Differences
- Performance — v2 is significantly faster for large projects because it is written in Go and runs as a Docker CLI plugin rather than a separate process
- Container naming — v1 used underscores (myproject_web_1), v2 uses hyphens (myproject-web-1)
- Compose Specification — v2 follows the open Compose Specification standard, which adds features like profiles, the service_completed_successfully condition, and GPU support
- No version field required — v2 does not require the version: key at the top of the file; it is ignored if present
- Build improvements — v2 uses BuildKit by default for faster, more efficient image builds
- Profiles — v2 adds service profiles for selectively starting groups of services
Profiles (v2 Feature)
services:
web:
image: myapp:latest
# no profile: always starts
db:
image: postgres:16-alpine
# no profile: always starts
debug:
image: busybox
profiles: ["debug"] # only starts when profile is activated
monitoring:
image: prom/prometheus:latest
profiles: ["monitoring"]
# Start only default services:
docker compose up -d
# Start with debug tools:
docker compose --profile debug up -d
# Start with monitoring:
docker compose --profile monitoring up -d
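Profiles can also be activated through the COMPOSE_PROFILES environment variable, which is convenient in CI or a .env file:
# Equivalent to --profile debug --profile monitoring
COMPOSE_PROFILES=debug,monitoring docker compose up -d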
9. Production Deployment Best Practices
Docker Compose is a legitimate production deployment tool for single-server applications. These practices ensure reliability and security.
Always Set Restart Policies
services:
api:
restart: unless-stopped # restarts on crash, survives host reboot
# does not restart if you manually stop it
# Other options:
# restart: "no" # never restart (default)
# restart: always # always restart, even after manual stop
# restart: on-failure # only restart on non-zero exit code
Set Resource Limits
services:
api:
deploy:
resources:
limits:
memory: 512M # hard cap
cpus: '1.0'
reservations:
memory: 256M # guaranteed minimum
cpus: '0.25'
Use Specific Image Tags
# BAD: mutable, unpredictable
image: postgres:latest
image: redis
image: myapp
# GOOD: pinned, reproducible
image: postgres:16.2-alpine3.19
image: redis:7.2-alpine
image: myapp:v2.3.1-abc1234
Secure Your Secrets
# Never commit .env files with real credentials to version control
# Use .env.example as a template
# .env.example (committed)
POSTGRES_USER=appuser
POSTGRES_PASSWORD=changeme
POSTGRES_DB=myapp
# .env (not committed, listed in .gitignore)
POSTGRES_USER=appuser
POSTGRES_PASSWORD=r3alS3cretP@ssw0rd
POSTGRES_DB=myapp_production
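For an extra layer beyond .env files, Compose secrets mount each value into the container as a file under /run/secrets instead of putting it in the environment. A minimal sketch — this relies on the image honoring _FILE variables, which the official postgres image does:
services:
  db:
    image: postgres:16-alpine
    secrets:
      - db-password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db-password
secrets:
  db-password:
    file: ./db-password.txt   # keep this file out of version control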
Configure Logging
services:
api:
logging:
driver: json-file
options:
max-size: "10m" # rotate after 10MB
max-file: "3" # keep 3 rotated files
tag: "{{.Name}}" # tag logs with container name
Do Not Expose Database Ports
services:
db:
image: postgres:16-alpine
# NO ports: section! Only accessible within the Docker network
networks: [backend]
api:
build: .
ports: ["3000:3000"] # only the API is exposed
networks: [backend]
networks:
backend:
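If you do need to reach the database from your workstation (a GUI client, ad-hoc queries), bind the published port to localhost only and connect over an SSH tunnel instead of exposing it publicly, as mentioned at the top of this guide:
services:
  db:
    image: postgres:16-alpine
    ports:
      - "127.0.0.1:5432:5432"   # reachable only from the host itself
    networks: [backend]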
10. Real-World Examples
WordPress + MySQL
services:
wordpress:
image: wordpress:6.4-apache
ports:
- "8080:80"
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: ${WP_DB_USER}
WORDPRESS_DB_PASSWORD: ${WP_DB_PASS}
WORDPRESS_DB_NAME: ${WP_DB_NAME}
volumes:
- wp-content:/var/www/html/wp-content
depends_on:
db:
condition: service_healthy
restart: unless-stopped
db:
image: mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASS}
MYSQL_DATABASE: ${WP_DB_NAME}
MYSQL_USER: ${WP_DB_USER}
MYSQL_PASSWORD: ${WP_DB_PASS}
volumes:
- db-data:/var/lib/mysql
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
volumes:
wp-content:
db-data:
Node.js + PostgreSQL + Redis
services:
api:
build:
context: .
target: production
ports:
- "3000:3000"
environment:
DATABASE_URL: postgres://${DB_USER}:${DB_PASS}@db:5432/${DB_NAME}
REDIS_URL: redis://cache:6379
NODE_ENV: production
depends_on:
db: { condition: service_healthy }
cache: { condition: service_healthy }
healthcheck:
test: ["CMD", "node", "-e", "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1))"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
restart: unless-stopped
deploy:
resources:
limits: { memory: 512M, cpus: '1.0' }
worker:
build:
context: .
target: production
command: ["node", "worker.js"]
environment:
DATABASE_URL: postgres://${DB_USER}:${DB_PASS}@db:5432/${DB_NAME}
REDIS_URL: redis://cache:6379
depends_on:
db: { condition: service_healthy }
cache: { condition: service_healthy }
restart: unless-stopped
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASS}
POSTGRES_DB: ${DB_NAME}
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
deploy:
resources:
limits: { memory: 1G, cpus: '2.0' }
cache:
image: redis:7-alpine
command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
volumes:
- redis-data:/data
restart: unless-stopped
volumes:
pgdata:
redis-data:
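A common companion pattern is to keep the production definition in docker-compose.yml and layer development-only tweaks in docker-compose.override.yml, which Compose merges automatically on docker compose up. A minimal sketch, assuming the Dockerfile from the example above also has a (hypothetical) development stage:
# docker-compose.override.yml (development only, merged automatically)
services:
  api:
    build:
      target: development        # assumed dev stage in the Dockerfile
    volumes:
      - ./src:/app/src            # bind mount for live reload
    environment:
      NODE_ENV: development
# Production: point at the base file only so the override is skipped
# docker compose -f docker-compose.yml up -d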
11. Debugging and Troubleshooting
Service Fails to Start
# Check logs for the failing service
docker compose logs api
# Check if the container exited and why
docker compose ps -a
# Run the service interactively to debug
docker compose run --rm api sh
# Validate your Compose file syntax
docker compose config
Database Connection Refused
# The most common cause: API starts before database is ready
# Fix: use health checks + depends_on condition
depends_on:
db:
condition: service_healthy
# Verify database is reachable from the API container
docker compose exec api sh -c "nc -zv db 5432"
# Check database logs
docker compose logs db
Port Conflicts
# Error: "Bind for 0.0.0.0:5432 failed: port is already allocated"
# Another container or host process is using the port
# Find what is using the port
sudo lsof -i :5432
# Solutions:
# 1. Stop the conflicting process
# 2. Change the host port mapping
ports:
- "5433:5432" # use host port 5433 instead
# 3. Remove the port mapping if external access is not needed
Volume Permission Issues
# Container runs as non-root but volume was created by root
# Fix: set ownership in Dockerfile or entrypoint
RUN mkdir -p /app/data && chown -R appuser:appgroup /app/data
# Or use init containers to fix permissions
services:
init-permissions:
image: alpine
command: chown -R 1000:1000 /data
volumes: [app-data:/data]
profiles: ["setup"]
Compose File Validation
# Validate and print the resolved configuration
docker compose config
# This catches YAML syntax errors, invalid keys,
# and shows the final resolved values after variable substitution
# Check only service names
docker compose config --services
# Check only volume names
docker compose config --volumes
Inspecting Networks
# List project networks
docker network ls --filter "name=myproject"
# Inspect a network to see connected containers and IPs
docker network inspect myproject_default
# Test DNS resolution from inside a container
docker compose exec api nslookup db
docker compose exec api ping -c 3 db
Frequently Asked Questions
What is the difference between Docker Compose v1 and v2?
Docker Compose v1 was a standalone Python binary invoked as docker-compose with a hyphen. Docker Compose v2 is a Go plugin integrated directly into the Docker CLI, invoked as docker compose with a space. V2 is significantly faster, supports the Compose Specification standard, adds features like service profiles and GPU access, and is the only version receiving updates since v1 reached end of life in July 2023. The docker-compose.yml file format is largely compatible between versions, but v2 handles edge cases like dependency ordering and build contexts more reliably.
How do I pass environment variables to Docker Compose services?
Docker Compose supports several methods for environment variables. You can define them inline using the environment key in your service definition. You can use a .env file in the same directory as your docker-compose.yml for variable substitution with ${VARIABLE} syntax. You can reference external env files per service using the env_file key. Shell environment variables override .env file values, and you can use the --env-file flag to specify a different .env file. For production, use env_file to keep secrets out of version control and load different configurations per environment.
How do health checks work in Docker Compose?
Health checks let Docker monitor whether a service is actually working, not just running. You define a test command, an interval between checks, a timeout for each check, a retry count, and an optional start_period grace window during which failures are not counted. Services can use depends_on with condition: service_healthy to wait for dependencies to pass health checks before starting. Common health check commands include curl -f http://localhost/health for web servers, pg_isready for PostgreSQL, and redis-cli ping for Redis.
Can I use Docker Compose in production?
Yes. Docker Compose is a valid production deployment tool for single-server applications. Add restart policies (restart: unless-stopped or restart: always), resource limits via the deploy key, health checks for all services, named volumes for persistent data, and proper logging configuration. Many successful applications run on a single server with Compose behind a reverse proxy like Traefik or nginx. Compose becomes insufficient when you need multi-server orchestration, automatic scaling, or zero-downtime rolling deployments across a cluster — that is when Kubernetes or similar tools are needed.
How do Docker Compose networks work?
Docker Compose automatically creates a default bridge network for each project and connects all services to it. Services can reach each other by service name as the hostname (for example, postgres://db:5432 where db is the service name). You can define custom networks in the top-level networks key to isolate groups of services — for example, a frontend network and a backend network where the database is only reachable from the API, not from the web server. Services can be attached to multiple networks. External networks let Compose services communicate with containers managed outside the Compose project.
Conclusion
Docker Compose turns multi-container chaos into a single, declarative YAML file. Whether you are running a development environment with hot reloading or deploying a production stack with health checks and resource limits, Compose handles the orchestration so you can focus on your application code.
Start with a simple two-service setup (application plus database), get comfortable with the core commands (up, down, logs, exec), and incrementally add complexity: health checks, custom networks, multiple environments, and resource limits. For single-server deployments, Compose is often all you need.
Learn More
- Docker: The Complete Developer's Guide — comprehensive Docker guide covering images, Dockerfiles, security, CI/CD, and performance
- Docker Containers for Beginners — hands-on introduction if you are new to containers
- Kubernetes YAML Generator — generate K8s manifests when you outgrow single-server Compose
- Docker Cheat Sheet — quick reference for Docker and Compose commands