Docker Multi-Stage Builds: The Complete Guide for 2026

Published February 12, 2026 · 25 min read

A typical Node.js Dockerfile produces a 1.2GB image containing npm, TypeScript, webpack, test frameworks, source files, and hundreds of dev dependencies that your production application never uses. Multi-stage builds fix this. They let you compile, test, and package in one stage, then copy only the final artifact into a minimal runtime image. The result: images that are 10–50x smaller, faster to pull, and far more secure.

This guide covers everything from basic syntax to production-grade patterns for every major language, with real Dockerfiles you can copy and adapt.

⚙ Generate Dockerfiles: Use our Dockerfile Builder to generate multi-stage Dockerfiles with best practices built in.

Table of Contents

  1. What Are Multi-Stage Builds
  2. Basic Syntax and FROM ... AS
  3. Reducing Image Size
  4. COPY --from and Copying Artifacts
  5. Multi-Stage for Every Language
  6. Named Stages and --target
  7. Build Cache Optimization
  8. Scratch and Distroless Images
  9. Security Benefits
  10. CI/CD Integration
  11. Common Mistakes and Best Practices
  12. FAQ

1. What Are Multi-Stage Builds

Before multi-stage builds, teams either shipped bloated images full of build tools, or maintained two separate Dockerfiles (one for building, one for runtime) connected by a shell script that extracted artifacts. Both approaches were painful.

Multi-stage builds solve this with a single Dockerfile that contains multiple FROM statements. Each FROM starts a new stage with its own base image and filesystem. The final stage produces your output image, and it only contains what you explicitly copy into it.

# Single-stage: everything in one image (BAD)
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install          # includes devDependencies
COPY . .
RUN npm run build        # tsc, webpack still in image
CMD ["node", "dist/server.js"]
# Result: ~1.2GB image with TypeScript, webpack, source code, tests

# Multi-stage: build and runtime separated (GOOD)
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
CMD ["node", "dist/server.js"]
# Result: ~150MB image with only compiled output

The builder stage compiles the code and is then discarded. The final image never sees the TypeScript compiler, the source code, the test files, or the devDependencies. This is not just a size optimization — it is a security improvement. Fewer files in the image means fewer potential vulnerabilities.
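
You can verify the difference directly. A quick sanity check (the tags and Dockerfile names here are placeholders):

# Build both variants and compare sizes
docker build -f Dockerfile.single -t myapp:single .
docker build -f Dockerfile.multi  -t myapp:multi  .
docker images myapp

# Confirm the build toolchain did not leak into the multi-stage image
docker run --rm myapp:multi sh -c 'ls node_modules/.bin/tsc 2>/dev/null || echo "tsc not present"'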

2. Basic Syntax and FROM ... AS

Name your stages with the AS keyword. This makes your Dockerfile readable and lets you reference stages by name in COPY --from and --target.

# syntax=docker/dockerfile:1

# Stage 1: named "deps" - install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Stage 2: named "builder" - compile code
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY src/ ./src/
COPY tsconfig.json ./
RUN npm run build

# Stage 3: final production image
FROM node:20-alpine
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY --from=deps --chown=app:app /app/node_modules ./node_modules
COPY --from=builder --chown=app:app /app/dist ./dist
COPY --from=builder --chown=app:app /app/package.json ./
USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]

Key points about the syntax:

- Stage names are case-insensitive; AS builder and AS Builder refer to the same stage.
- A stage can use an earlier stage as its base (FROM deps AS builder), inheriting its entire filesystem.
- The last stage in the file is the default build target unless you pass --target.
- With BuildKit, stages the target does not depend on are skipped entirely, so unused stages cost nothing.
- Unnamed stages can be referenced by zero-based index (--from=0), but names are far more maintainable.
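
Because a stage can start from another stage, you can fork the build into variants that share the same groundwork. A minimal sketch (the stage names and npm scripts are illustrative):

FROM node:20-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Fork 1: run tests against the freshly built output
FROM builder AS test
RUN npm test

# Fork 2: package only the artifacts
FROM node:20-alpine AS final
COPY --from=builder /app/dist ./dist
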
3. Reducing Image Size

Here is what multi-stage builds achieve in practice across different languages:

# Image size comparison: single-stage vs multi-stage

# Go application
# Single stage (golang:1.22):         ~1.1GB
# Multi-stage (golang -> scratch):    ~8MB     (99% reduction)

# Node.js application
# Single stage (node:20):             ~1.2GB
# Multi-stage (node -> node:alpine):  ~150MB   (87% reduction)

# Python application
# Single stage (python:3.12):         ~1.0GB
# Multi-stage (python -> slim):       ~180MB   (82% reduction)

# Rust application
# Single stage (rust:1.77):           ~1.4GB
# Multi-stage (rust -> scratch):      ~5MB     (99% reduction)

# Java application
# Single stage (maven:3.9-eclipse):   ~800MB
# Multi-stage (maven -> eclipse-jre): ~250MB   (69% reduction)

The size reduction comes from three sources: removing build tools (compilers, package managers), removing source code and intermediate files, and using a smaller base image for the runtime stage.
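
To see where the bytes actually come from, inspect the image layer by layer. docker history shows the size each instruction contributed (the tag is a placeholder):

# Per-layer size breakdown
docker history --no-trunc --format "{{.Size}}\t{{.CreatedBy}}" myapp:latest

# Compare total sizes across tags
docker images --format "{{.Repository}}:{{.Tag}}\t{{.Size}}" | grep myapp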

4. COPY --from and Copying Artifacts

The COPY --from instruction copies files from a named stage or from an external image.

# Copy from a named stage
COPY --from=builder /app/binary /usr/local/bin/app

# Copy from a stage by index (0-based, not recommended)
COPY --from=0 /app/binary /usr/local/bin/app

# Copy from an external image (not part of your build)
COPY --from=nginx:alpine /etc/nginx/nginx.conf /etc/nginx/
COPY --from=busybox:uclibc /bin/wget /usr/local/bin/wget

# Copy with ownership change
COPY --from=builder --chown=app:app /app/dist ./dist

# Copy multiple paths
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/config ./config
COPY --from=builder /app/migrations ./migrations

Copying from external images is useful for grabbing specific binaries without installing entire packages. Need curl in your distroless image? Copy just the binary from an Alpine image.

# Grab specific utilities from other images
FROM gcr.io/distroless/static-debian12 AS runtime
COPY --from=busybox:uclibc /bin/sh /bin/sh
COPY --from=busybox:uclibc /bin/wget /bin/wget
# Now you have a minimal image with just sh and wget

5. Multi-Stage for Every Language

Go: Build to Scratch

FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build \
    -ldflags="-s -w" \
    -o /server ./cmd/server

FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
# Final image: ~8MB

CGO_ENABLED=0 produces a statically linked binary that runs on scratch. The -ldflags="-s -w" strips debug information, saving 30–40% binary size. Copy CA certificates so the binary can make HTTPS calls.
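
Before trusting the binary to scratch, you can confirm it really is static by checking it inside the builder stage. A rough sketch (assumes the builder stage above):

docker build --target builder -t myapp:builder .
docker run --rm myapp:builder ldd /server
# For a static binary, musl's ldd reports it is not a dynamic executable
# (exact wording varies); a dynamic binary lists its shared libraries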

Node.js: TypeScript to Alpine

FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY tsconfig.json ./
COPY src/ ./src/
RUN npm run build && npm prune --omit=dev

FROM node:20-alpine
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY --from=builder --chown=app:app /app/dist ./dist
COPY --from=builder --chown=app:app /app/node_modules ./node_modules
COPY --from=builder --chown=app:app /app/package.json ./
USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]
# Final image: ~150MB

Python: Wheels in Builder, Slim Runtime

FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

FROM python:3.12-slim
RUN groupadd -r app && useradd -r -g app -d /app -s /sbin/nologin app
WORKDIR /app
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir --no-index --find-links=/wheels /wheels/* \
    && rm -rf /wheels
COPY --chown=app:app . .
USER app
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:create_app()"]
# Final image: ~180MB (vs ~1GB with full python:3.12)

Building wheels in the first stage means the slim runtime stage does not need gcc, make, or header files to install compiled Python packages like psycopg2 or numpy.
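
A quick way to confirm no toolchain leaked into the runtime image (the tag is a placeholder, and the psycopg2 check assumes it is in requirements.txt):

docker run --rm mypyapp:latest sh -c 'command -v gcc || echo "gcc not present"'
docker run --rm mypyapp:latest python -c "import psycopg2; print('compiled deps work')"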

Rust: Compile to Scratch

FROM rust:1.77-alpine AS builder
RUN apk add --no-cache musl-dev
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main(){}" > src/main.rs
RUN cargo build --release && rm -rf src target/release/deps/myapp*
COPY src/ ./src/
RUN cargo build --release

FROM scratch
COPY --from=builder /app/target/release/myapp /myapp
EXPOSE 8080
ENTRYPOINT ["/myapp"]
# Final image: ~5MB

The dummy main.rs trick caches dependency compilation. When only your source code changes, Cargo rebuilds only your crate, not every dependency.
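
The effect is easy to observe across two builds (timings are illustrative):

docker build -t myapp .                # first build: compiles every dependency (slow)
echo '// tweak' >> src/main.rs         # a source-only change
docker build -t myapp .                # dependency layers hit the cache; only your crate rebuilds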

Java: Maven Build, JRE Runtime

FROM maven:3.9-eclipse-temurin-21 AS builder
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline -B
COPY src/ ./src/
RUN mvn package -DskipTests -B

FROM eclipse-temurin:21-jre-alpine
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY --from=builder --chown=app:app /app/target/*.jar app.jar
USER app
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
# Final image: ~250MB (vs ~800MB with full Maven + JDK)

6. Named Stages and --target

The --target flag lets you build up to a specific stage. Combined with named stages, a single Dockerfile produces images for development, testing, and production.

# Multi-purpose Dockerfile
FROM node:20-alpine AS base
WORKDIR /app
COPY package.json package-lock.json ./

FROM base AS deps
RUN npm ci

FROM base AS dev
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

FROM deps AS test
COPY . .
RUN npm test

FROM deps AS builder
COPY . .
RUN npm run build && npm prune --omit=dev

FROM node:20-alpine AS production
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY --from=builder --chown=app:app /app/dist ./dist
COPY --from=builder --chown=app:app /app/node_modules ./node_modules
COPY --from=builder --chown=app:app /app/package.json ./
USER app
CMD ["node", "dist/server.js"]

# Build specific stages
docker build --target dev -t myapp:dev .        # Development image
docker build --target test -t myapp:test .      # Runs tests during build
docker build --target production -t myapp .     # Production image
docker build -t myapp .                         # Same as production (last stage)

This pattern is powerful for CI/CD: run --target test in the test step (the build fails if tests fail), then build the production image only after tests pass.

7. Build Cache Optimization

Docker caches each layer. When a layer's input has not changed, Docker reuses the cached result. The key to fast builds is ordering instructions so that infrequently changing layers come first.

# BAD: Cache busted on every code change
FROM node:20-alpine AS builder
WORKDIR /app
COPY . .                   # Any source file change invalidates this
RUN npm ci                 # Re-installs all deps every time
RUN npm run build

# GOOD: Dependencies cached separately from source code
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./    # Changes rarely
RUN npm ci                                # Cached unless package.json changes
COPY . .                                  # Only source code changes bust from here
RUN npm run build

BuildKit Cache Mounts

Cache mounts persist package manager caches across builds. Instead of downloading every package on each build, Docker reuses the cached downloads.

# syntax=docker/dockerfile:1

# Go: cache module downloads and build cache
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 go build -o /server ./cmd/server

# Python: cache pip downloads
FROM python:3.12-slim AS builder
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --prefix=/install -r requirements.txt

# Node.js: cache npm downloads
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci

Registry-Based Caching for CI

# Export and import cache via registry (for CI where local cache is lost)
docker buildx build \
  --cache-from type=registry,ref=myregistry.io/myapp:cache \
  --cache-to type=registry,ref=myregistry.io/myapp:cache,mode=max \
  -t myapp:latest .

# GitHub Actions cache
docker buildx build \
  --cache-from type=gha \
  --cache-to type=gha,mode=max \
  -t myapp:latest .

8. Scratch and Distroless Images

For the absolute smallest images, copy your binary into scratch (empty filesystem) or a Google distroless image.

# scratch: completely empty filesystem (0 bytes)
# Use for: Go, Rust, C/C++ with static linking
# Does NOT have: shell, libc, CA certs, timezone data, users

FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /app/server /server
USER 1000
ENTRYPOINT ["/server"]

# distroless: minimal runtime (has libc, CA certs, tzdata)
# Available for: static, base, cc, python3, java, nodejs
# Does NOT have: shell, package manager, utilities

# Go/Rust with dynamic linking
FROM gcr.io/distroless/base-debian12

# Go/Rust with static linking
FROM gcr.io/distroless/static-debian12

# Node.js application
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app /app
WORKDIR /app
CMD ["dist/server.js"]

# Java application
FROM gcr.io/distroless/java21-debian12
COPY --from=builder /app/target/app.jar /app.jar
CMD ["app.jar"]

# Use the :nonroot tag to run as a non-root user (UID 65532)
FROM gcr.io/distroless/static-debian12:nonroot

Debugging distroless containers is tricky because there is no shell. Use the :debug tag variants which include BusyBox:

# Debug variant includes a shell
docker run --rm -it gcr.io/distroless/base-debian12:debug sh

# In production, use ephemeral debug containers (Kubernetes)
kubectl debug -it mypod --image=busybox --target=mycontainer

9. Security Benefits

Multi-stage builds are a security feature, not just a size optimization. By excluding build tools from the production image, you eliminate entire categories of attack vectors.

# What your production image should NOT contain:
# - Compilers (gcc, javac, tsc, rustc)
# - Package managers (apt, yum, npm, pip, cargo)
# - Build tools (make, webpack, maven)
# - Source code (.ts, .java, .go, .rs files)
# - Test files and test frameworks
# - Dev dependencies
# - Git history (.git directory)
# - Documentation and READMEs
# - CI/CD configuration files

# Verify what is in your final image:
docker run --rm myapp:latest ls -la /app/
docker run --rm myapp:latest find / -name "*.ts" 2>/dev/null
docker history myapp:latest

Combine multi-stage builds with these security practices for maximum hardening:

# Production-hardened Node.js Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts
COPY . .
RUN npm run build && npm prune --omit=dev

FROM node:20-alpine
# Install dumb-init (apk --no-cache leaves no package index to clean up)
RUN apk add --no-cache dumb-init

# Non-root user
RUN addgroup -S app && adduser -S app -G app

WORKDIR /app
COPY --from=builder --chown=app:app /app/dist ./dist
COPY --from=builder --chown=app:app /app/node_modules ./node_modules
COPY --from=builder --chown=app:app /app/package.json ./

# Security settings
USER app
EXPOSE 3000
# dumb-init handles PID 1 properly (signal forwarding, zombie reaping)
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

10. CI/CD Integration

Multi-stage builds integrate naturally into CI/CD pipelines. Use --target to run tests as part of the Docker build, and registry caching for fast builds.

# GitHub Actions: build, test, and push
name: Docker Multi-Stage Build

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Run tests (build up to test stage)
        run: docker build --target test .

      - name: Build production image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: myregistry.io/myapp:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Scan image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myregistry.io/myapp:${{ github.sha }}
          exit-code: 1
          severity: CRITICAL,HIGH

# GitLab CI with Kaniko (no Docker daemon required)
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.22.0-debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context $CI_PROJECT_DIR
      --dockerfile Dockerfile
      --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      --cache=true
      --cache-repo=$CI_REGISTRY_IMAGE/cache

11. Common Mistakes and Best Practices

Mistake 1: Copying too much into the final stage

# BAD: Copies everything including source, tests, devDeps
COPY --from=builder /app /app

# GOOD: Copy only what the runtime needs
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

Mistake 2: Missing .dockerignore

# .dockerignore - prevents unnecessary files from entering the build context
node_modules
.git
.env
*.md
Dockerfile
docker-compose*.yml
.github
coverage
tests
__pycache__
*.pyc
.vscode
.idea
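
If you are unsure whether .dockerignore is working, you can inspect the build context with a throwaway image (ctx-check is an arbitrary tag; this sketch reads the Dockerfile from stdin):

docker build -t ctx-check -f- . <<'EOF'
FROM busybox
COPY . /ctx
RUN find /ctx -type f | head -50
EOF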

Mistake 3: Not using non-root users

# BAD: Running as root in the final stage
FROM node:20-alpine
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/server.js"]

# GOOD: Explicit non-root user
FROM node:20-alpine
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY --from=builder --chown=app:app /app/dist ./dist
USER app
CMD ["node", "dist/server.js"]

Mistake 4: Not leveraging dependency caching

# BAD: COPY . before dependency install (busts cache on every code change)
FROM golang:1.22 AS builder
COPY . .
RUN go mod download && go build -o /app

# GOOD: Copy dependency files first, install, then copy source
FROM golang:1.22 AS builder
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o /app

Mistake 5: Using latest tags

# BAD: Unpredictable, different image on every build
FROM node:latest AS builder

# GOOD: Pinned version, reproducible builds
FROM node:20.11-alpine AS builder

# BEST: Pinned by digest, immutable
FROM node:20.11-alpine@sha256:abc123... AS builder
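
To resolve a tag to its digest for pinning:

# Print the manifest digest for a tag
docker buildx imagetools inspect node:20.11-alpine

# Or inspect locally pulled images
docker images --digests node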

Best Practices Checklist

- Name every stage with AS and copy artifacts by stage name, not index.
- Copy dependency manifests and install dependencies before copying source code.
- Copy only runtime artifacts into the final stage, never the whole build directory.
- Use a minimal runtime base: alpine, slim, distroless, or scratch.
- Run the final stage as a non-root user.
- Pin base image versions, ideally by digest, for reproducible builds.
- Maintain a .dockerignore file to keep the build context small.
- Run tests with --target in CI before building the production image.
- Scan the final image for vulnerabilities before pushing.

Frequently Asked Questions

What is a Docker multi-stage build?

A Docker multi-stage build is a Dockerfile technique that uses multiple FROM statements to create separate build stages. Each stage can use a different base image and perform different tasks. The final stage copies only the artifacts it needs from previous stages, leaving behind build tools, source code, and dev dependencies. This produces dramatically smaller and more secure production images because the final image contains only what is needed to run the application.

How much can multi-stage builds reduce Docker image size?

Multi-stage builds routinely reduce image sizes by 90% or more. A Go application built in a single stage using golang:1.22 produces an image around 1GB. With a multi-stage build that compiles in the golang stage and copies the binary to a scratch image, the final image can be under 10MB. Node.js apps typically shrink from 1GB+ to under 150MB. Java apps go from 800MB+ to under 200MB with a JRE-only runtime stage. The reduction depends on how many build-time dependencies your application requires.

What is the difference between scratch and distroless base images?

The scratch image is a completely empty filesystem with nothing in it — no shell, no libc, no CA certificates. It works only for statically compiled binaries (Go with CGO_ENABLED=0, Rust with musl). Distroless images from Google contain a minimal runtime: CA certificates, timezone data, and basic libraries like glibc, but no shell, package manager, or utilities. Use scratch when you can produce a fully static binary; use distroless when your application needs a minimal runtime environment.

How does COPY --from work in multi-stage builds?

COPY --from copies files from a previous build stage or from an external image into the current stage. Reference stages by name (COPY --from=builder /app/binary /app/) or by index (COPY --from=0 /app/binary /app/). Named stages are preferred for readability. You can also copy from external images: COPY --from=nginx:alpine /etc/nginx/nginx.conf /etc/nginx/nginx.conf. This is useful for grabbing configuration files or binaries from official images without adding them as build stages.

How do I optimize Docker build cache with multi-stage builds?

Order your Dockerfile instructions from least frequently changed to most frequently changed. Copy dependency manifests (package.json, go.mod, requirements.txt) and install dependencies before copying source code. This way, dependency installation is cached and only re-runs when the manifest changes. Use BuildKit cache mounts (--mount=type=cache) to persist package manager caches across builds. For CI environments, use --cache-from with registry-based caching to share build cache across machines.

Can I target a specific stage with docker build --target?

Yes. The --target flag stops the build at a specific named stage and produces an image from that stage. This is useful for creating different images from the same Dockerfile: docker build --target dev . for a development image with debugging tools, docker build --target test . for a test image, and docker build . (no target) for the final production image. This keeps your entire build pipeline in a single Dockerfile while producing purpose-specific images for each environment.

Continue Learning

Related Resources

- Docker Compose Guide: Multi-container orchestration and configuration
- Docker Complete Guide: Master Docker from basics to advanced patterns
- Docker Security Guide: Harden containers and secure your supply chain
- Docker Volumes & Storage: Persistent data management for containers
- Dockerfile Builder: Generate production-ready Dockerfiles
- Docker Commands Cheat Sheet: Quick reference for all Docker commands