Docker Containers for Beginners: A Practical Guide
Docker changed how we build, ship, and run software. Instead of hearing "it works on my machine," you package your application and its entire environment into a container that runs identically everywhere. This guide walks you through Docker from zero to productive, with hands-on examples at every step.
1. What Are Containers and Why Docker?
A container is a lightweight, standalone package that includes everything an application needs to run: code, runtime, libraries, and system tools. Unlike virtual machines, containers share the host operating system's kernel, making them start in seconds and use a fraction of the memory.
Here is how containers compare to virtual machines:
Virtual Machines                         Containers

+----------+----------+----------+       +----------+----------+----------+
|  App A   |  App B   |  App C   |       |  App A   |  App B   |  App C   |
+----------+----------+----------+       +----------+----------+----------+
| Bins/Libs| Bins/Libs| Bins/Libs|       | Bins/Libs| Bins/Libs| Bins/Libs|
+----------+----------+----------+       +----------+----------+----------+
| Guest OS | Guest OS | Guest OS |       |          Docker Engine         |
+----------+----------+----------+       +--------------------------------+
|            Hypervisor          |       |     Host Operating System      |
+--------------------------------+       +--------------------------------+
|      Host Operating System     |       |            Hardware            |
+--------------------------------+       +--------------------------------+
|            Hardware            |
+--------------------------------+
Why Docker specifically?
- Consistency — identical environments from development to production
- Isolation — each container runs independently; a crash in one does not affect others
- Speed — containers start in seconds, not minutes like VMs
- Portability — runs on any machine with Docker installed (Linux, macOS, Windows)
- Efficiency — shares the host kernel, so 10 containers use far less RAM than 10 VMs
- Ecosystem — Docker Hub hosts hundreds of thousands of pre-built images
2. Installing Docker
Docker Desktop is the easiest way to get started on macOS and Windows. On Linux, install Docker Engine directly.
Linux (Ubuntu/Debian)
# Remove old versions
sudo apt-get remove docker docker-engine docker.io containerd runc
# Install using the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Add your user to the docker group (avoids needing sudo)
sudo usermod -aG docker $USER
# Log out and back in, then verify
docker --version
docker run hello-world
macOS / Windows
Download Docker Desktop from docker.com/products/docker-desktop, run the installer, and start the application. Docker Desktop includes Docker Engine, Docker CLI, and Docker Compose.
Verify Installation
$ docker --version
Docker version 27.5.1, build 9f9e405
$ docker compose version
Docker Compose version v2.32.4
3. Docker Basics: Images, Containers, and Registries
Three concepts form the foundation of Docker:
Images are read-only templates used to create containers. Think of an image as a class in object-oriented programming. It defines what the container will look like but does not run by itself.
Containers are running instances of images. If an image is a class, a container is an object. You can create multiple containers from the same image, each running independently.
Registries are repositories where images are stored and shared. Docker Hub is the default public registry. Organizations often run private registries for internal images.
# The relationship:
Registry (Docker Hub)
└── Image (nginx:latest)
    ├── Container 1 (web-server-prod)
    ├── Container 2 (web-server-staging)
    └── Container 3 (web-server-dev)
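To see the relationship in practice, pull an image once and start several containers from it. The container names below are just examples:
# Pull the image from the registry once
docker pull nginx:latest
# Create two independent containers from the same image
docker run -d --name web-a nginx:latest
docker run -d --name web-b nginx:latest
docker images   # one image listed
docker ps       # two containers running from it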
4. Essential Docker Commands
These are the commands you will use every day. For a comprehensive reference, see our Docker Commands Cheat Sheet.
Running Containers
# Pull and run an nginx web server
docker run -d -p 8080:80 --name my-web nginx
# Breakdown:
# -d = run in detached mode (background)
# -p 8080:80 = map host port 8080 to container port 80
# --name = give the container a name
# nginx = the image to use
Visit http://localhost:8080 and you will see the nginx welcome page.
Listing and Inspecting
# List running containers
docker ps
# List ALL containers (including stopped)
docker ps -a
# View container logs
docker logs my-web
# Follow logs in real-time
docker logs -f my-web
# Inspect container details (IP, mounts, config)
docker inspect my-web
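docker inspect prints a large JSON document. If you only need one field, the -f/--format flag accepts a Go template; for example, to print just the container's IP address on the default bridge network:
docker inspect -f '{{.NetworkSettings.IPAddress}}' my-web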
Executing Commands Inside Containers
# Open a shell inside a running container
docker exec -it my-web /bin/bash
# Run a single command
docker exec my-web cat /etc/nginx/nginx.conf
# Breakdown:
# -i = interactive (keep stdin open)
# -t = allocate a pseudo-TTY
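Note that minimal images such as Alpine variants ship only /bin/sh (BusyBox), not bash, so adjust the shell path. A quick way to try it; the container name and the sleep command are just for illustration:
# Keep an Alpine container alive, then open a shell with sh instead of bash
docker run -d --name my-alpine alpine sleep 300
docker exec -it my-alpine /bin/sh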
Building Images
# Build an image from a Dockerfile in the current directory
docker build -t my-app:1.0 .
# Build with a specific Dockerfile
docker build -f Dockerfile.prod -t my-app:prod .
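Once an image is built, you can retag it and push it to a registry; the registry hostname below is a placeholder for your own:
# Retag the local image for a registry, then push it
docker tag my-app:1.0 registry.example.com/my-app:1.0
docker push registry.example.com/my-app:1.0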
Stopping and Removing
# Stop a running container
docker stop my-web
# Remove a stopped container
docker rm my-web
# Stop and remove in one step
docker rm -f my-web
# Remove an image
docker rmi nginx
# Clean up everything unused (containers, images, networks, cache)
docker system prune -a
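A common bulk-cleanup idiom, shown here as a sketch, stops and removes every container on the machine, so use it with care:
# Stop all running containers, then remove all containers (running or stopped)
docker stop $(docker ps -q)
docker rm $(docker ps -aq)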
5. Writing Dockerfiles
A Dockerfile is a text file with instructions to build a Docker image. Each instruction creates a layer in the image.
A Complete Example: Node.js Application
# Start from an official Node.js base image
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package files first (for better caching)
COPY package.json package-lock.json ./
# Install dependencies
RUN npm ci --omit=dev
# Copy the rest of the application code
COPY . .
# Document which port the app listens on
EXPOSE 3000
# Define the command to run the application
CMD ["node", "server.js"]
Build and run it:
docker build -t my-node-app .
docker run -d -p 3000:3000 my-node-app
Key Dockerfile Instructions
FROM — sets the base image. Every Dockerfile must start with FROM.
FROM ubuntu:22.04
FROM python:3.12-slim
FROM node:20-alpine # Alpine variants are much smaller
RUN — executes commands during the build. Each RUN creates a new layer.
# Bad: creates 3 layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean
# Good: single layer, smaller image
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
COPY vs ADD — both copy files into the image. Prefer COPY; it is simpler and more predictable. ADD can extract archives and fetch URLs, which is rarely needed.
COPY . /app
COPY --chown=node:node package.json /app/
WORKDIR — sets the working directory for subsequent instructions.
WORKDIR /app
# All following RUN, COPY, CMD instructions use /app as the base
EXPOSE — documents which port the container listens on. It does not actually publish the port; you still need -p at runtime.
EXPOSE 8080
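The uppercase -P flag is EXPOSE's runtime counterpart: it publishes every exposed port to a random high host port. The container name and image here are illustrative:
# Publish all EXPOSEd ports to random host ports
docker run -d -P --name port-demo my-app
docker port port-demo   # show which host ports were assigned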
ENV — sets environment variables available during build and at runtime.
ENV NODE_ENV=production
ENV PORT=3000
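Values set with ENV are defaults; you can override them with -e when starting the container (this assumes the app reads PORT, as in the earlier Node.js example):
# Override the ENV defaults at runtime
docker run -d -e NODE_ENV=development -e PORT=8080 -p 8080:8080 my-node-app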
CMD vs ENTRYPOINT — both define what runs when the container starts, but they work differently:
# CMD: can be overridden entirely at runtime
CMD ["node", "server.js"]
# docker run my-app -> runs: node server.js
# docker run my-app python script.py -> runs: python script.py
# ENTRYPOINT: always runs; CMD provides default arguments
ENTRYPOINT ["python"]
CMD ["app.py"]
# docker run my-app -> runs: python app.py
# docker run my-app script.py -> runs: python script.py
Rule of thumb: use CMD for applications. Use ENTRYPOINT when your container should always run a specific executable.
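Even an ENTRYPOINT can be replaced explicitly at runtime, which is handy for debugging an image:
# Swap the entrypoint for a shell to poke around inside the image
docker run -it --entrypoint /bin/sh my-app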
A Python Example
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first for layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
6. Docker Compose: Multi-Container Applications
Real applications usually involve multiple services: a web server, a database, a cache, a message queue. Docker Compose lets you define and run multi-container applications with a single YAML file. You can validate your compose files with our YAML Validator.
Basic docker-compose.yml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
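Before starting the stack, docker compose config is a quick sanity check: it parses the file and prints the fully resolved configuration, so YAML mistakes surface immediately:
# Validate and print the resolved configuration
docker compose config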
Essential Compose Commands
# Start all services (build images if needed)
docker compose up -d
# View running services
docker compose ps
# View logs from all services
docker compose logs
# Follow logs from a specific service
docker compose logs -f web
# Stop all services
docker compose down
# Stop and remove volumes (deletes database data!)
docker compose down -v
# Rebuild images and restart
docker compose up -d --build
# Scale a service to multiple instances
docker compose up -d --scale web=3
A Full-Stack Example: React + API + Database
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - api

  api:
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    environment:
      - DB_HOST=db
      - DB_PORT=5432
      - DB_NAME=myapp
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - ./backend:/app        # Mount source code for live reloading
      - /app/node_modules     # Prevent overwriting node_modules

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: devpass
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  db-data:
Note the healthcheck on the database and condition: service_healthy on the API. This ensures the API does not start until the database is actually ready to accept connections, not just when the container has started.
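You can watch this happen while the stack comes up:
docker compose up -d
docker compose ps   # the db service shows "(healthy)" once pg_isready succeeds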
7. Volumes and Data Persistence
Containers are ephemeral. When you remove a container, its filesystem is gone. Volumes solve this by persisting data outside the container lifecycle.
Types of Mounts
# Named volume (managed by Docker, best for databases)
# (postgres also needs POSTGRES_PASSWORD set, or the container exits at startup)
docker run -d -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres:16
# Bind mount (maps a host directory, best for development)
docker run -d -v /home/user/project:/app my-app
# tmpfs mount (in-memory, for sensitive data that should not persist)
docker run -d --tmpfs /app/temp my-app
Managing Volumes
# List all volumes
docker volume ls
# Inspect a volume
docker volume inspect pgdata
# Create a volume manually
docker volume create my-data
# Remove a volume
docker volume rm my-data
# Remove all unused volumes
docker volume prune
Volume Tips
- Use named volumes for database storage — Docker manages the path and they survive container removal
- Use bind mounts during development for live code reloading
- Never store important data only inside a container — always use a volume
- Back up volume data regularly:
docker run --rm -v pgdata:/data -v $(pwd):/backup alpine tar czf /backup/pgdata.tar.gz /data
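The matching restore reverses the direction; this assumes the archive was created with the backup command above:
# Restore the archive into the same (or a new) volume
docker run --rm -v pgdata:/data -v $(pwd):/backup alpine tar xzf /backup/pgdata.tar.gz -C /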
8. Docker Networking
Docker creates isolated networks so containers can communicate with each other securely.
Default Networks
# List networks
docker network ls
# Default output:
NETWORK ID     NAME      DRIVER    SCOPE
abc123         bridge    bridge    local    # default for standalone containers
def456         host      host      local    # shares host network stack
ghi789         none      null      local    # no networking
Container Communication
Containers on the same Docker network can reach each other by container name:
# Create a custom network
docker network create my-network
# Run two containers on the same network
docker run -d --name api --network my-network my-api
docker run -d --name db --network my-network postgres:16
# From the api container, you can now reach the database at:
# hostname: db
# port: 5432
# No need to publish the database port to the host!
In Docker Compose, all services are automatically placed on a shared network. You reference other services by their service name:
# In docker-compose.yml, if you have services "web" and "db",
# the web service can connect to the database using:
DATABASE_URL=postgres://user:pass@db:5432/myapp
# ^^ service name as hostname
Port Mapping Explained
# -p HOST_PORT:CONTAINER_PORT
docker run -p 8080:80 nginx
# Map to specific host interface
docker run -p 127.0.0.1:8080:80 nginx # only accessible from localhost
# Let Docker choose the host port
docker run -p 80 nginx # Docker assigns a random available port
9. Best Practices
Use Multi-Stage Builds
Multi-stage builds dramatically reduce image size by separating the build environment from the runtime environment:
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production (only includes the built output)
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# Result: ~25MB instead of ~400MB with node_modules
Use .dockerignore
Like .gitignore, a .dockerignore file excludes files from the build context. This speeds up builds and prevents secrets from leaking into images:
# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
.env
.env.*
Dockerfile
docker-compose.yml
README.md
.vscode
coverage
.nyc_output
Minimize Image Size
- Use alpine or slim base images: python:3.12-slim is roughly 130MB vs python:3.12 at roughly 900MB
- Combine RUN commands to reduce layers
- Clean up package manager caches in the same RUN instruction
- Use multi-stage builds to exclude build dependencies
- Copy only what you need (not COPY . . if you can be specific)
Security Best Practices
# Run as non-root user
FROM node:20-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY --chown=appuser:appgroup . .
USER appuser
CMD ["node", "server.js"]
- Never run as root inside containers — create and switch to a non-root user
- Never put secrets in images — use environment variables or Docker secrets
- Pin image versions — use node:20.11-alpine instead of node:latest
- Scan images for vulnerabilities — docker scout cves my-image (see the example after this list)
- Use read-only filesystems where possible — docker run --read-only my-app
- Keep images up to date — rebuild regularly to get security patches
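Two of those in practice, as a sketch that assumes the Docker Scout CLI plugin is installed (it ships with recent Docker Desktop):
# Scan a local image for known CVEs
docker scout cves my-image
# Run with a read-only root filesystem, allowing writes only in /tmp
docker run -d --read-only --tmpfs /tmp my-app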
Layer Caching
Docker caches each layer. If a layer has not changed, Docker reuses it. Order your Dockerfile instructions from least to most frequently changed:
# Good order (dependencies change less often than source code):
COPY package.json package-lock.json ./ # changes rarely
RUN npm ci # cached if package.json unchanged
COPY . . # changes frequently
# Bad order (invalidates cache every time you edit code):
COPY . . # changes frequently
RUN npm ci # must re-run even if deps unchanged
10. Common Troubleshooting
Container Exits Immediately
# Check the logs for errors
docker logs my-container
# Run interactively to debug
docker run -it my-image /bin/sh
# Common causes:
# - Application crashes on startup
# - CMD/ENTRYPOINT not set correctly
# - Missing environment variables or config files
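The exit code and last status often narrow the cause down quickly:
# Show the last status and exit code of the container
docker ps -a --filter name=my-container --format '{{.Names}}: {{.Status}}'
docker inspect --format '{{.State.ExitCode}}' my-container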
Port Already in Use
# Error: "bind: address already in use"
# Find what is using the port
lsof -i :8080 # macOS/Linux
netstat -tulnp # Linux
# Use a different host port
docker run -p 8081:80 nginx
Permission Denied
# Error: "Got permission denied while trying to connect to the Docker daemon"
# Solution: Add your user to the docker group
sudo usermod -aG docker $USER
# Then log out and log back in
# Or use sudo
sudo docker ps
Out of Disk Space
# See what is consuming space
docker system df
# Clean up unused resources
docker system prune # removes stopped containers, unused networks, dangling images
docker system prune -a # also removes all unused images (not just dangling)
docker volume prune # removes unused volumes
Cannot Connect Between Containers
# Make sure containers are on the same network
docker network inspect my-network
# Use container names (not localhost or 127.0.0.1)
# Wrong: DATABASE_URL=postgres://localhost:5432/db
# Right: DATABASE_URL=postgres://db-container:5432/db
# Check if the target container is actually running
docker ps
# Test connectivity from inside a container
docker exec -it my-app ping db-container
Build Cache Issues
# Force a clean rebuild (no cache)
docker build --no-cache -t my-app .
# Rebuild in Compose
docker compose build --no-cache
docker compose up -d --build
Container Using Too Much Memory or CPU
# Monitor resource usage
docker stats
# Set resource limits
docker run -d --memory=512m --cpus=1.5 my-app
# In docker-compose.yml:
services:
  web:
    image: my-app
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.5'
What to Learn Next
- Docker Hub — publish your own images and use automated builds
- Docker Swarm or Kubernetes — orchestrate containers across multiple servers
- CI/CD pipelines — build and deploy Docker images automatically with GitHub Actions, GitLab CI, or Jenkins
- Docker secrets — manage sensitive data like API keys and passwords securely
The best way to learn Docker is to containerize a project you already have. Start with a simple Dockerfile, get it running, then add Docker Compose when you need a database or cache.
Related: Check out our Docker Commands Cheat Sheet and our YAML Validator for docker-compose files.