Docker Volumes & Storage: The Complete Guide for 2026

Published February 12, 2026 · 28 min read

Containers are ephemeral by design. Stop a container, remove it, and every file it created disappears. That is exactly what you want for stateless application processes — but databases, file uploads, configuration state, and logs need to survive container restarts and replacements. Docker's storage subsystem — volumes, bind mounts, and tmpfs mounts — solves this problem by decoupling data from the container lifecycle.

This guide covers everything about Docker storage: how the container filesystem works, the three mount types and when to use each, volume drivers for remote storage, backup and restore strategies, Docker Compose volume configuration, storage drivers under the hood, database container patterns, and performance optimization. Every section includes practical commands you can run immediately.

⚙ Quick reference: Build Docker run commands with our Docker Run Builder and validate Compose files with the Docker Compose Validator.

Table of Contents

  1. Understanding the Docker Storage Model
  2. Container Filesystem Layers
  3. Docker Volumes: Creation, Mounting, and Managing
  4. Bind Mounts: When and How to Use Them
  5. tmpfs Mounts: In-Memory Storage
  6. Named Volumes vs Anonymous Volumes
  7. Volume Drivers and Plugins
  8. Volume Backup and Restore Strategies
  9. Docker Compose Volume Configuration
  10. Storage Drivers: overlay2, btrfs, and More
  11. Volumes in Multi-Container Applications
  12. Database Container Storage Patterns
  13. Comparison: Volumes vs Bind Mounts vs tmpfs
  14. Best Practices and Performance Tips
  15. Frequently Asked Questions

1. Understanding the Docker Storage Model

Docker uses a layered filesystem architecture. Every Docker image is composed of read-only layers, each representing a set of filesystem changes from a Dockerfile instruction. When you run a container, Docker adds a thin writable layer on top of the image layers. All changes the container makes — creating files, modifying existing ones, deleting files — happen in this writable layer.

This architecture is storage-efficient: multiple containers from the same image share all read-only layers and only store their individual differences in the writable layer. However, it creates a fundamental problem: the writable layer is tied to the container's lifecycle. Remove the container, and the writable layer and all its data are gone.

Docker provides three mount types that bypass the writable layer: volumes (fully managed by Docker), bind mounts (host paths mapped into the container), and tmpfs mounts (in-memory storage). Volumes and bind mounts persist data beyond the container lifecycle; each is covered in depth below.

# See what storage driver and filesystem Docker is using
docker info | grep -i storage
docker info | grep -i "Docker Root Dir"

# Inspect the layers of an image
docker image inspect nginx --format '{{.RootFS.Layers}}'

# See disk usage by images, containers, and volumes
docker system df
docker system df -v    # verbose: per-item breakdown

2. Container Filesystem Layers

Every Docker image is built from a series of layers. Each layer represents a Dockerfile instruction that modifies the filesystem. The layers are stacked using a union filesystem (typically OverlayFS) so that the container sees a single, merged view of all layers.

When a container writes to a file that exists in a lower read-only layer, the storage driver uses copy-on-write (CoW): the file is copied from the read-only layer into the writable layer, then the modification happens on the copy. This means the first write to a large existing file is expensive because the entire file must be copied. Subsequent writes to the same file are fast because they hit the writable layer directly.

# Run a container and write data to the writable layer
docker run -d --name demo nginx
docker exec demo sh -c 'echo "hello" > /tmp/test.txt'

# The file exists inside the container
docker exec demo cat /tmp/test.txt
# Output: hello

# See the size of the writable layer
docker ps -s --format "table {{.Names}}\t{{.Size}}"

# Remove the container — data is gone
docker rm -f demo
# /tmp/test.txt no longer exists anywhere

The writable layer has important limitations: it is tied to the specific container, it cannot be easily shared between containers, it uses the storage driver (which may have lower write performance than direct filesystem access), and it is deleted when the container is removed. For any data that matters, you need a mount.
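Repeating the experiment above with a named volume shows the difference (container and volume names are arbitrary):

```shell
# Same experiment, but writing to a mounted named volume
docker run -d --name demo2 -v demo-data:/data nginx
docker exec demo2 sh -c 'echo "hello" > /data/test.txt'

# Remove the container — the volume survives
docker rm -f demo2

# A brand-new container mounting the same volume still sees the file
docker run --rm -v demo-data:/data alpine cat /data/test.txt
# Output: hello

# Clean up
docker volume rm demo-data
```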

3. Docker Volumes: Creation, Mounting, and Managing

Volumes are the preferred mechanism for persisting data in Docker. They are fully managed by Docker, independent of the container lifecycle, and work consistently across Linux, macOS, and Windows. A volume is a directory on the host filesystem that Docker manages inside /var/lib/docker/volumes/.

Creating Volumes

# Create a named volume
docker volume create mydata

# Create a volume with specific options
docker volume create --driver local \
  --opt type=none \
  --opt device=/data/myapp \
  --opt o=bind \
  myapp-data

# List all volumes
docker volume ls

# Inspect a volume (shows mountpoint, driver, labels)
docker volume inspect mydata

Mounting Volumes into Containers

# Mount with -v flag (short syntax)
docker run -d -v mydata:/app/data nginx

# Mount with --mount flag (explicit syntax, recommended)
docker run -d --mount source=mydata,target=/app/data nginx

# Mount as read-only
docker run -d --mount source=mydata,target=/app/data,readonly nginx
docker run -d -v mydata:/app/data:ro nginx

# A missing volume is auto-created when first mounted
docker run -d -v newvolume:/app/data nginx

The --mount flag is more explicit and less error-prone than -v: every option is named, so there is no ambiguity between volume names, host paths, and mount options. The behavioral difference shows up with bind mounts, where --mount returns an error if the host path does not exist while -v silently creates it as an empty directory. For volumes, both flags auto-create a missing volume, so a misspelled volume name silently gives you a new, empty one — double-check names when a container unexpectedly starts with no data.
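The misspelling pitfall is easy to spot after the fact — a sketch with illustrative names:

```shell
# Typo in the volume name: Docker silently creates a new, empty "mydta"
docker run -d --name oops -v mydta:/app/data nginx

# The typo shows up as an unexpected volume in the list
docker volume ls
# DRIVER    VOLUME NAME
# local     mydata
# local     mydta      <-- accidental empty volume

# Confirm which volume a container actually mounted
docker inspect oops --format '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{"\n"}}{{end}}'
```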

Managing Volumes

# List volumes with filtering
docker volume ls --filter driver=local
docker volume ls --filter dangling=true    # unused volumes

# Remove a specific volume
docker volume rm mydata

# Remove all unused volumes (not mounted by any container)
docker volume prune

# Remove ALL unused data (volumes, images, containers, networks)
docker system prune --volumes

# Get volume usage information
docker system df -v | grep -A5 "VOLUME NAME"

4. Bind Mounts: When and How to Use Them

Bind mounts map a specific file or directory on the host directly into the container. Unlike volumes, bind mounts depend on the host's directory structure and are not managed by Docker. With --mount, the host path must already exist or Docker returns an error; with -v, Docker creates a missing host path as an empty directory.

# Bind mount with --mount (recommended)
docker run -d \
  --mount type=bind,source=/home/user/app,target=/app \
  node:20

# Bind mount with -v (short syntax)
docker run -d -v /home/user/app:/app node:20

# Read-only bind mount
docker run -d \
  --mount type=bind,source=/etc/config,target=/app/config,readonly \
  myapp

# Mount a single file
docker run -d \
  --mount type=bind,source=/home/user/app.conf,target=/etc/app.conf \
  myapp

When to Use Bind Mounts

Bind mounts make sense when the host needs direct access to the files: source code during development (so edits on the host trigger live reloading in the container), host-managed configuration files mounted read-only, and build output or logs you want to inspect with host tools. For data that the container owns outright, prefer a volume.

Bind Mount Gotchas

Bind mounts can obscure the container directory. If you mount an empty host directory to /app, anything the image had in /app is hidden. This is different from volumes, where Docker copies the container directory contents into a new empty volume. Bind mounts also introduce portability issues since they depend on the host path existing. File permission mismatches between the host user and the container user are a common source of "permission denied" errors.
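The shadowing behavior is easy to see with the stock nginx image (paths and names here are illustrative):

```shell
# nginx ships default pages in /usr/share/nginx/html

# Bind-mounting an empty host directory hides them
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/usr/share/nginx/html nginx \
  ls /usr/share/nginx/html
# (no output — the image's files are shadowed)

# Mounting a new, empty named volume instead copies the image
# content into the volume first
docker run --rm -v web-html:/usr/share/nginx/html nginx \
  ls /usr/share/nginx/html
docker volume rm web-html
```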

# Common development bind mount with live reloading
docker run -d \
  -v $(pwd)/src:/app/src \
  -v $(pwd)/package.json:/app/package.json \
  -v /app/node_modules \
  -p 3000:3000 \
  node:20 npm run dev

# The /app/node_modules anonymous volume prevents the host
# bind mount from hiding container-installed dependencies

5. tmpfs Mounts: In-Memory Storage

A tmpfs mount stores data in the host's memory only. The data is never written to the host filesystem and is lost when the container stops. This is ideal for sensitive data that should not persist on disk, such as session tokens, temporary encryption keys, or scratch space for computations.

# tmpfs mount with --mount
docker run -d \
  --mount type=tmpfs,target=/app/tmp,tmpfs-size=100m,tmpfs-mode=1777 \
  myapp

# tmpfs mount with --tmpfs (simpler syntax)
docker run -d --tmpfs /app/tmp:size=100m myapp

# Multiple tmpfs mounts
docker run -d \
  --mount type=tmpfs,target=/app/tmp \
  --mount type=tmpfs,target=/app/secrets,tmpfs-size=10m \
  myapp

tmpfs Options

With the --mount syntax, two tmpfs-specific options are available: tmpfs-size caps the size of the mount (for example, 100m), and tmpfs-mode sets the octal permission mode of the mount point (the default is 1777, a world-writable sticky directory).

tmpfs mounts are only available for Linux containers, and they cannot be shared between containers.

6. Named Volumes vs Anonymous Volumes

Docker supports two kinds of volumes: named and anonymous. The only difference is whether you give the volume a human-readable name.

# Named volume: you choose the name
docker run -d -v postgres-data:/var/lib/postgresql/data postgres:16

# Anonymous volume: Docker generates a random hash name
docker run -d -v /var/lib/postgresql/data postgres:16

# List volumes — named volumes are easy to identify
docker volume ls
# DRIVER    VOLUME NAME
# local     postgres-data
# local     a1b2c3d4e5f6789...

Always use named volumes for data you care about. Anonymous volumes are created when a Dockerfile declares a VOLUME instruction or when you use -v with only a container path. They are nearly impossible to identify later, easy to accidentally delete with docker volume prune, and hard to back up. Named volumes are self-documenting, easy to manage, and survive docker compose down (but not docker compose down -v).
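Anonymous volumes tend to accumulate silently. A sketch for auditing and cleaning up unused volumes:

```shell
# List volumes not referenced by any container
docker volume ls -qf dangling=true

# Remove them in bulk (xargs -r runs nothing if the list is empty)
docker volume ls -qf dangling=true | xargs -r docker volume rm

# Since Docker 23.0, docker volume prune removes only unused
# anonymous volumes by default; add --all to include named ones
docker volume prune
docker volume prune --all
```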

# VOLUME in Dockerfile creates an anonymous volume at run time
# Dockerfile:
# FROM postgres:16
# VOLUME /var/lib/postgresql/data

# Override the anonymous volume with a named one
docker run -d -v my-pg-data:/var/lib/postgresql/data postgres:16

# Find which volumes a container uses
docker inspect mycontainer --format '{{json .Mounts}}' | jq

7. Volume Drivers and Plugins

The default local volume driver stores data on the host filesystem. Volume plugins extend Docker with drivers that store data on remote systems: NFS shares, cloud storage, distributed filesystems, and more.

# Create a volume using NFS (with the local driver)
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw,nfsvers=4 \
  --opt device=:/exports/data \
  nfs-data

# Create a volume using CIFS/SMB
docker volume create --driver local \
  --opt type=cifs \
  --opt device=//192.168.1.100/share \
  --opt o=addr=192.168.1.100,username=user,password=pass \
  smb-data

# Use the volume
docker run -d -v nfs-data:/app/data myapp

Popular Volume Plugins

Well-known examples include rexray (block storage on AWS EBS, Azure, and GCE), vieux/sshfs (volumes over SSH), and Portworx (distributed, replicated storage for clusters). A plugin is installed once per host, then used like any other volume driver.

# Install a volume plugin
docker plugin install rexray/ebs

# Create a volume with the plugin
docker volume create --driver rexray/ebs \
  --opt size=50 \
  --opt volumeType=gp3 \
  ebs-data

# Use it like any other volume
docker run -d -v ebs-data:/app/data myapp

8. Volume Backup and Restore Strategies

Docker does not provide built-in backup commands for volumes. The standard approach uses a temporary container to archive the volume contents.

Backup a Volume

# Stop the container using the volume (recommended for consistency)
docker stop myapp

# Back up the volume to a tar.gz file
docker run --rm \
  -v mydata:/source:ro \
  -v $(pwd)/backups:/backup \
  ubuntu tar czf /backup/mydata-$(date +%Y%m%d).tar.gz -C /source .

# Restart the container
docker start myapp

Restore a Volume

# Create a new volume (or use existing)
docker volume create mydata-restored

# Restore from backup
docker run --rm \
  -v mydata-restored:/target \
  -v $(pwd)/backups:/backup \
  ubuntu tar xzf /backup/mydata-20260212.tar.gz -C /target

Database-Specific Backups

For databases, always prefer the database's native dump tools over filesystem-level backups. Raw file copies while the database is running risk corruption.

# PostgreSQL backup
docker exec my-postgres pg_dumpall -U postgres > backup.sql

# PostgreSQL restore
docker exec -i my-postgres psql -U postgres < backup.sql

# MySQL backup — run the client inside the container so -p can read
# the password from the container's MYSQL_ROOT_PASSWORD environment
# variable instead of prompting for a TTY
docker exec my-mysql sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases' > backup.sql

# MySQL restore
docker exec -i my-mysql sh -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD"' < backup.sql

# MongoDB backup
docker exec my-mongo mongodump --out /dump
docker cp my-mongo:/dump ./mongo-backup

# Redis backup (trigger RDB snapshot)
docker exec my-redis redis-cli BGSAVE
docker cp my-redis:/data/dump.rdb ./redis-backup.rdb

9. Docker Compose Volume Configuration

Docker Compose manages volumes declaratively in the docker-compose.yml file. Volumes defined in the top-level volumes key are created when you run docker compose up and persist across restarts.

# docker-compose.yml with named volumes
services:
  web:
    image: nginx:alpine
    volumes:
      - static-files:/usr/share/nginx/html:ro
    ports:
      - "80:80"

  app:
    build: .
    volumes:
      - static-files:/app/static          # shared volume
      - app-data:/app/data                 # app-specific volume
      - ./config:/app/config:ro            # bind mount for config
    depends_on:
      - db

  db:
    image: postgres:16
    volumes:
      - pg-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: secret

volumes:
  static-files:       # named volume (Docker manages it)
  app-data:           # named volume
  pg-data:            # named volume for database

Volume Options in Compose

# Advanced volume configuration
volumes:
  # Volume with specific driver options
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.100,rw,nfsvers=4
      device: ":/exports/data"

  # External volume (must exist before compose up)
  shared-data:
    external: true

  # Volume with custom name
  db-data:
    name: production-db-data

  # Volume with labels
  app-uploads:
    labels:
      com.example.description: "User uploaded files"
      com.example.backup: "daily"

Compose Volume Lifecycle

# Create and start services (volumes are created automatically)
docker compose up -d

# Stop services (volumes are preserved)
docker compose down

# Stop services AND delete volumes (DESTRUCTIVE)
docker compose down -v

# List project volumes
docker volume ls --filter label=com.docker.compose.project=myproject

10. Storage Drivers: overlay2, btrfs, and More

Storage drivers handle how image layers and the container writable layer are stored and managed on disk. The storage driver affects performance, disk usage, and available features. This is separate from volume drivers — storage drivers manage the container filesystem, while volume drivers manage volumes.

Available Storage Drivers

Driver | Backing filesystem | Status | Best for
-------|--------------------|--------|---------
overlay2 | xfs, ext4 | Default, recommended | All general workloads
btrfs | btrfs | Supported | Hosts already using btrfs, snapshot workflows
zfs | zfs | Supported | Hosts already using ZFS, advanced snapshot needs
fuse-overlayfs | any | Supported | Rootless Docker
devicemapper | direct-lvm | Deprecated | Legacy RHEL/CentOS systems

# Check current storage driver
docker info | grep "Storage Driver"

# Change storage driver in /etc/docker/daemon.json
# (overlay2.size limits each container's writable layer and requires
# an xfs backing filesystem mounted with the pquota option)
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.size=20G"
  ]
}

# Restart Docker after changing
sudo systemctl restart docker

# WARNING: Changing storage drivers makes existing images
# and containers inaccessible. Back up first.

overlay2 works by stacking directories: one or more lowerdirs (the read-only image layers) beneath an upperdir (the container's writable layer), presented to the container as a single merged view. It is fast for read-heavy workloads and uses disk efficiently through layer sharing. For write-heavy workloads with large files, mount a volume, which bypasses the storage driver entirely.
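These directories are visible in a container's GraphDriver metadata (paths below are abbreviated; reading them requires root on the Docker host):

```shell
# Show the overlay2 directories backing a container named demo
docker inspect demo --format '{{json .GraphDriver.Data}}' | jq
# {
#   "LowerDir":  "/var/lib/docker/overlay2/.../diff:...",
#   "MergedDir": "/var/lib/docker/overlay2/.../merged",
#   "UpperDir":  "/var/lib/docker/overlay2/.../diff",
#   "WorkDir":   "/var/lib/docker/overlay2/.../work"
# }

# Files the container created or modified live in UpperDir
sudo ls "$(docker inspect demo --format '{{.GraphDriver.Data.UpperDir}}')"
```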

11. Volumes in Multi-Container Applications

In multi-container architectures, volumes serve as the shared data layer between services. Common patterns include sharing static assets, passing files between processing stages, and centralizing logs.

# Multi-container app sharing a volume
services:
  # Worker generates reports
  worker:
    build: ./worker
    volumes:
      - reports:/app/output

  # Web server serves the reports
  web:
    image: nginx:alpine
    volumes:
      - reports:/usr/share/nginx/html/reports:ro
    ports:
      - "80:80"

  # Backup sidecar archives reports nightly
  backup:
    image: ubuntu
    volumes:
      - reports:/data:ro
      - ./backups:/backups
    command: >
      sh -c 'while true; do
        tar czf /backups/reports-$$(date +%Y%m%d).tar.gz -C /data .;
        sleep 86400;
      done'

volumes:
  reports:

Init Containers Pattern

# Use an init container to populate a volume before the main app starts
services:
  init-config:
    image: busybox
    volumes:
      - config:/config
    command: sh -c 'cp /defaults/* /config/ 2>/dev/null; echo "Config initialized"'
    restart: "no"

  app:
    build: .
    volumes:
      - config:/app/config:ro
    depends_on:
      init-config:
        condition: service_completed_successfully

volumes:
  config:

12. Database Container Storage Patterns

Databases are the most critical use case for Docker volumes. Getting the storage configuration right is the difference between a reliable system and catastrophic data loss.

PostgreSQL

services:
  postgres:
    image: postgres:16
    volumes:
      - pg-data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d:ro
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    shm_size: '256m'    # increase shared memory for better performance

volumes:
  pg-data:

secrets:
  db_password:
    file: ./secrets/db_password.txt

MySQL / MariaDB

services:
  mysql:
    image: mysql:8
    volumes:
      - mysql-data:/var/lib/mysql
      - ./my-custom.cnf:/etc/mysql/conf.d/custom.cnf:ro
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_pw
      MYSQL_DATABASE: myapp

volumes:
  mysql-data:

Redis with Persistence

services:
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes --appendfsync everysec
    volumes:
      - redis-data:/data

volumes:
  redis-data:

MongoDB

services:
  mongo:
    image: mongo:7
    volumes:
      - mongo-data:/data/db
      - mongo-config:/data/configdb
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secret

volumes:
  mongo-data:
  mongo-config:

Key rules for database containers: always use named volumes, never share a database data volume between multiple database instances, use shm_size for PostgreSQL, mount init scripts to /docker-entrypoint-initdb.d/ for first-run setup, and use database-native backup tools rather than volume-level copies.
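Those rules can be combined into a scheduled dump sidecar. A sketch, reusing the service, user, and database names from the PostgreSQL example above (PG_PASSWORD is a hypothetical host environment variable; in production, read the password from a secrets file instead):

```yaml
services:
  pg-backup:
    image: postgres:16              # reuse the server image so pg_dump matches the server version
    environment:
      PGPASSWORD: ${PG_PASSWORD}    # hypothetical; prefer Docker secrets in production
    volumes:
      - ./backups:/backups
    entrypoint: >
      sh -c 'while true; do
        pg_dump -h postgres -U appuser myapp > /backups/myapp-$$(date +%Y%m%d).sql;
        sleep 86400;
      done'
    depends_on:
      - postgres
```

The sidecar never mounts pg-data itself: it talks to the server over the network, so backups stay consistent even while the database is under load.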

13. Comparison: Volumes vs Bind Mounts vs tmpfs

Feature | Volumes | Bind Mounts | tmpfs Mounts
--------|---------|-------------|-------------
Managed by Docker | Yes | No | No
Storage location | /var/lib/docker/volumes/ | Anywhere on host | Host memory (RAM)
Persists after container removal | Yes | Yes (on host) | No
Shareable between containers | Yes | Yes | No
Pre-populates with image data | Yes (new empty volumes) | No (host content wins) | No
Supports remote/cloud drivers | Yes | No | No
Works on all OS | Yes | Yes | Linux only
Performance | Native (Linux), good (Desktop) | Native (Linux), slow (Desktop) | Fastest (RAM-backed)
CLI management | docker volume commands | Host filesystem tools | None (ephemeral)
Best for | Databases, persistent app data | Dev source code, host configs | Secrets, temp scratch space

Rule of thumb: Use volumes by default for any persistent data. Use bind mounts only for development source code and host-managed configuration files. Use tmpfs for sensitive temporary data that must never touch disk.

14. Best Practices and Performance Tips

Volume Best Practices

Use named volumes for anything you would mind losing, and label them (for example, com.example.backup) so tooling and humans can identify them later. Mount read-only (:ro) wherever a container only needs to read. Back up on a schedule using the tar-container pattern or database-native dump tools, and periodically test your restores. Treat docker volume prune and docker compose down -v with care — both permanently delete data.

Performance Tips

On Linux, volumes and bind mounts both run at native filesystem speed. On Docker Desktop (Mac and Windows), bind mounts cross a VM file-sharing boundary and can be dramatically slower — keep I/O-heavy paths such as node_modules, build caches, and database data in volumes, and bind-mount only the source files you edit. For write-heavy workloads, write to a mounted volume rather than the container's writable layer, which pays copy-on-write overhead through the storage driver.

# Optimal development setup with volume for dependencies
services:
  app:
    build: .
    volumes:
      - ./src:/app/src                # bind mount: source code
      - node-modules:/app/node_modules # volume: dependencies (fast)
      - build-cache:/app/.cache        # volume: build cache (fast)
    ports:
      - "3000:3000"

volumes:
  node-modules:
  build-cache:

Security Considerations

Mount volumes and bind mounts read-only wherever possible. Keep secrets off disk: use tmpfs mounts for transient sensitive data, and prefer Docker secrets (as in the PostgreSQL example above) over environment variables for credentials. Be deliberate about what you bind-mount — exposing sensitive host paths gives the container direct access to them, and host-side file permissions should be restricted to the users that actually need access.

Frequently Asked Questions

What is the difference between Docker volumes and bind mounts?

Docker volumes are managed entirely by Docker and stored in /var/lib/docker/volumes/. They are the preferred mechanism for persisting data because they work consistently across Linux, macOS, and Windows, can be managed with Docker CLI commands, can be safely shared among multiple containers, and support volume drivers for remote storage. Bind mounts map a specific host directory or file into the container and depend on the host's filesystem structure. Use volumes for data that containers own (databases, uploads) and bind mounts for data you need to edit from the host (source code during development).

How do I back up and restore Docker volumes?

To back up a Docker volume, run a temporary container that mounts both the volume and a host directory, then use tar to archive the volume contents: docker run --rm -v myvolume:/data -v $(pwd):/backup ubuntu tar czf /backup/myvolume-backup.tar.gz -C /data . To restore, run a similar container that extracts the archive into the volume: docker run --rm -v myvolume:/data -v $(pwd):/backup ubuntu tar xzf /backup/myvolume-backup.tar.gz -C /data. For databases, prefer using the database's native dump tools (pg_dump, mysqldump) for consistent backups rather than copying raw data files.

Why does my Docker container lose data when it restarts?

Container data is lost because Docker containers use a writable layer on top of the image's read-only layers. This writable layer is ephemeral — it is deleted when the container is removed (docker rm). Stopping and starting a container preserves the writable layer, but removing and recreating it (as docker compose up may do) deletes it. To persist data across container lifecycles, mount a Docker volume or bind mount to any directory that contains important data. For databases, always mount a volume to the data directory.

Can multiple Docker containers share the same volume?

Yes, multiple containers can mount the same Docker volume simultaneously. This is useful for sharing data between containers, such as a web server reading files written by a background worker. However, Docker does not provide file locking or synchronization — if multiple containers write to the same files concurrently, you risk data corruption. For databases, never share a data volume between multiple database instances. Use the :ro (read-only) mount option for containers that should not modify the data.
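A minimal sketch of the shared-volume pattern (names arbitrary):

```shell
# Writer appends a timestamp every 5 seconds
docker run -d --name writer -v shared:/data alpine \
  sh -c 'while true; do date >> /data/log.txt; sleep 5; done'

# A reader mounts the same volume read-only
docker run --rm -v shared:/data:ro alpine cat /data/log.txt

# Clean up
docker rm -f writer && docker volume rm shared
```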

What is the best storage driver for Docker in production?

The overlay2 storage driver is the recommended default for almost all production environments. It is stable, performant, supported on all major Linux distributions, and has been Docker's default since Docker CE 17.06. For specific use cases, btrfs and zfs drivers offer native snapshotting and compression but require their respective filesystems on the host. Avoid the devicemapper driver in loopback mode as it has poor performance. Check your current driver with docker info | grep Storage.

Conclusion

Docker storage is the bridge between ephemeral containers and persistent data. Volumes are the right choice for almost everything: databases, file uploads, application state, and shared data between services. Bind mounts fill the development niche where you need host-to-container file synchronization. tmpfs handles the edge case of sensitive data that must never touch disk.

The most important takeaways: always use named volumes for persistent data, never rely on the container writable layer for anything important, use database-native backup tools instead of filesystem copies, and on Docker Desktop, prefer volumes over bind mounts for dependencies and build caches. Get these fundamentals right, and your containerized data will be safe, performant, and easy to manage.

⚙ Essential tools: Build docker run commands with the Docker Run Builder, validate Compose YAML with the Docker Compose Validator, and convert configs with JSON to YAML.

Learn More

Related Resources

Docker Complete Guide
Master Docker from basics to advanced production patterns
Docker Compose Guide
Multi-container orchestration with Docker Compose
Docker Networking Guide
Bridge, overlay, macvlan drivers, DNS, and service discovery
Docker for Beginners
Practical beginner's guide to Docker containers
Docker Run Builder
Build docker run commands interactively
Docker Compose Validator
Validate and lint Docker Compose YAML files