Kubernetes Pods: The Complete Guide for 2026
Every workload in Kubernetes runs inside a Pod. Whether you are deploying a single-container web server or a complex multi-container data pipeline, the Pod is the fundamental building block you need to understand. It controls how your containers share networking, storage, and lifecycle — and getting Pod configuration right is the difference between a stable production cluster and one that constantly restarts, runs out of memory, or drops traffic.
This guide covers Pods from the basic YAML spec through advanced patterns like sidecar containers, probes, security contexts, and disruption budgets. If you are new to Kubernetes, start with our Kubernetes Complete Guide first. If you are coming from Docker Compose, see our Docker Compose guide for comparison.
Table of Contents
- What is a Pod
- Pod vs Container
- Pod YAML Spec
- Multi-Container Pods
- Pod Lifecycle
- Liveness, Readiness, and Startup Probes
- Resource Requests and Limits
- Environment Variables and ConfigMaps
- Secrets
- Volumes
- Pod Networking
- Pod Security Context
- Pod Disruption Budgets
- Debugging Pods
- Best Practices
- Frequently Asked Questions
1. What is a Pod
A Pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process in your cluster. A Pod encapsulates one or more containers, shared storage volumes, a unique network IP address, and options that govern how the containers should run.
Think of a Pod as a logical host for your application. All containers within a Pod share the same network namespace — they communicate over localhost and share the same IP address and port space. They can also share volumes, which makes it easy to pass data between containers without network calls.
In practice, most Pods contain a single container. Multi-container Pods are used when containers are tightly coupled and need to share resources, such as a web server with a log-shipping sidecar.
2. Pod vs Container
The distinction between a Pod and a container is critical for understanding Kubernetes:
- Container — a single isolated process running from a container image (like a Docker container). It has its own filesystem but, within a Pod, shares the network namespace with other containers.
- Pod — a Kubernetes wrapper around one or more containers. It provides shared networking (all containers share one IP), shared storage (volumes mounted into multiple containers), and a shared lifecycle (all containers are scheduled together on the same node).
You never create a bare container in Kubernetes. Even a single-container workload runs inside a Pod. The Pod abstraction exists because some workloads require tightly coupled helper containers (sidecars) that must share the same network and storage.
# A Docker container vs a Kubernetes Pod
# Docker: run a single container
docker run -d -p 8080:80 nginx:1.27-alpine

# Kubernetes: same container, wrapped in a Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.27-alpine
    ports:
    - containerPort: 80
3. Pod YAML Spec
Every Kubernetes resource follows the same structure: apiVersion, kind, metadata, and spec. Here is a complete Pod spec with the most important fields annotated:
apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # unique name within the namespace
  namespace: default                # namespace (defaults to "default")
  labels:                           # key-value pairs for selection
    app: my-app
    version: v2.1.0
  annotations:                      # non-identifying metadata
    description: "Main application pod"
spec:
  restartPolicy: Always             # Always | OnFailure | Never
  serviceAccountName: my-app-sa     # RBAC identity
  terminationGracePeriodSeconds: 30
  initContainers:                   # run before main containers
  - name: init-db
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z db-svc 5432; do sleep 2; done']
  containers:
  - name: app                       # main application container
    image: my-app:2.1.0
    ports:
    - containerPort: 8080
      protocol: TCP
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: url
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "1000m"
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    volumeMounts:
    - name: config-vol
      mountPath: /etc/app/config
      readOnly: true
  volumes:
  - name: config-vol
    configMap:
      name: app-config
In production, you rarely create Pods directly. Instead, you use a Deployment, StatefulSet, or Job that manages Pods for you. But understanding the Pod spec is essential because those higher-level resources all embed a Pod template with this same structure.
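To make that concrete, here is a minimal sketch of a Deployment embedding a Pod template under spec.template. The name, labels, and image are placeholders carried over from the examples above, not a prescribed convention:

```yaml
# Illustrative Deployment: spec.template is a Pod template
# (metadata + spec, exactly like a standalone Pod minus apiVersion/kind)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # Deployment keeps 3 Pods running
  selector:
    matchLabels:
      app: my-app
  template:                    # embedded Pod template starts here
    metadata:
      labels:
        app: my-app            # must match the selector above
    spec:
      containers:
      - name: app
        image: my-app:2.1.0    # placeholder image
        ports:
        - containerPort: 8080
```

Everything this guide covers for Pods (probes, resources, volumes, security contexts) goes inside that template block unchanged.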
4. Multi-Container Pods
Multi-container Pods use well-established patterns. Each pattern solves a specific problem by co-locating helper containers with the main application container.
Sidecar Pattern
A sidecar container runs alongside the main container to provide supporting functionality like logging, monitoring, or proxying. Both containers share the same network and can share volumes.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logging
spec:
  containers:
  - name: web
    image: nginx:1.27-alpine
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper              # sidecar: ships logs to central system
    image: fluent/fluent-bit:3.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}
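Newer Kubernetes versions also support native sidecars: an init container with restartPolicy: Always starts before the main container, keeps running alongside it, and is stopped after it during termination (the SidecarContainers feature, enabled by default since roughly v1.29; verify against your cluster version). A hedged sketch of the same log shipper in that form:

```yaml
# Native sidecar form — a sketch, assumes a cluster with the
# SidecarContainers feature enabled (default in recent releases)
apiVersion: v1
kind: Pod
metadata:
  name: web-with-native-sidecar
spec:
  initContainers:
  - name: log-shipper
    image: fluent/fluent-bit:3.0
    restartPolicy: Always          # marks this init container as a sidecar
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true
  containers:
  - name: web
    image: nginx:1.27-alpine
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}
```

The advantage over a plain second container is ordering: the sidecar is guaranteed to be up before the main container starts and to outlive it during shutdown.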
Init Container Pattern
Init containers run sequentially before the main containers start. Each must complete successfully before the next one begins. They are ideal for setup tasks that should not be part of the main container image.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db              # 1st: wait for database
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z postgres-svc 5432; do echo waiting; sleep 2; done']
  - name: run-migrations           # 2nd: run database migrations
    image: my-app:2.1.0
    command: ['python', 'manage.py', 'migrate']
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: url
  containers:
  - name: app                      # main container starts after init completes
    image: my-app:2.1.0
    ports:
    - containerPort: 8080
Ambassador Pattern
An ambassador container proxies network connections from the main container to external services, simplifying the main application's connection logic.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
  - name: app
    image: my-app:2.1.0
    env:
    - name: DB_HOST
      value: "localhost"           # connects to ambassador on localhost
    - name: DB_PORT
      value: "5432"
  - name: cloud-sql-proxy          # ambassador: handles auth + encryption
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8
    args:
    - "--port=5432"
    - "my-project:us-central1:my-db"
    securityContext:
      runAsNonRoot: true
5. Pod Lifecycle
A Pod moves through a series of phases from creation to termination. Understanding these phases is essential for debugging and writing correct probes.
- Pending — The Pod has been accepted by the cluster but is not yet running. It may be waiting for scheduling, image pulls, or init containers to complete.
- Running — The Pod has been bound to a node and at least one container is running, starting, or restarting.
- Succeeded — All containers terminated with exit code 0 and will not be restarted. Common for Jobs and batch workloads.
- Failed — All containers have terminated and at least one exited with a non-zero code or was killed by the system.
- Unknown — The state cannot be determined, usually because communication with the node has been lost.
# Check Pod phase
kubectl get pod my-app
# NAME READY STATUS RESTARTS AGE
# my-app 1/1 Running 0 5m
# See detailed conditions and events
kubectl describe pod my-app
# Watch Pod status changes in real time
kubectl get pod my-app -w
Pod Termination Sequence
When a Pod is deleted, Kubernetes follows a graceful shutdown sequence:
- Pod is set to Terminating state and removed from Service endpoints
- PreStop hook runs (if defined)
- SIGTERM is sent to the main process in each container
- Kubernetes waits up to terminationGracePeriodSeconds (default 30s)
- If the process has not exited, SIGKILL is sent
apiVersion: v1
kind: Pod
metadata:
  name: graceful-app
spec:
  terminationGracePeriodSeconds: 60      # give app 60s to shut down
  containers:
  - name: app
    image: my-app:2.1.0
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]   # wait for traffic to drain
6. Liveness, Readiness, and Startup Probes
Probes let Kubernetes monitor container health and make automated decisions about traffic routing and restarts.
Liveness Probe
Detects when a container is stuck. If the probe fails, Kubernetes kills and restarts the container.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15   # wait before first check
  periodSeconds: 20         # check every 20 seconds
  timeoutSeconds: 5         # timeout for each check
  failureThreshold: 3       # restart after 3 consecutive failures
Readiness Probe
Detects when a container is ready to accept traffic. If the probe fails, the Pod is removed from Service endpoints but is not restarted.
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3       # remove from Service after 3 failures
  successThreshold: 1       # add back after 1 success
Startup Probe
Designed for slow-starting containers. It disables liveness and readiness probes until the startup probe succeeds, preventing premature restarts during initialization.
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30      # 30 * 10s = 300s max startup time
  periodSeconds: 10
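In practice the three probes are often combined on one container. A sketch (the paths and port are placeholders from the examples above):

```yaml
# All three probes together (illustrative; endpoints are placeholders)
containers:
- name: app
  image: my-app:2.1.0
  startupProbe:                  # runs first; the other two wait for it
    httpGet: { path: /healthz, port: 8080 }
    failureThreshold: 30
    periodSeconds: 10
  livenessProbe:                 # restarts the container if it deadlocks
    httpGet: { path: /healthz, port: 8080 }
    periodSeconds: 20
  readinessProbe:                # gates Service traffic, never restarts
    httpGet: { path: /ready, port: 8080 }
    periodSeconds: 10
```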
Probe Types
# HTTP GET (most common for web services)
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: X-Health-Check
      value: "kubernetes"

# TCP Socket (for services without HTTP endpoints)
livenessProbe:
  tcpSocket:
    port: 5432

# Exec Command (for custom health logic)
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - pg_isready -U postgres

# gRPC (for gRPC services)
livenessProbe:
  grpc:
    port: 50051
7. Resource Requests and Limits
Resource requests and limits control how much CPU and memory a container can use. Getting these right prevents out-of-memory kills and noisy neighbor problems.
- Requests — the guaranteed minimum. The scheduler uses requests to decide which node has enough capacity for the Pod.
- Limits — the hard ceiling. If a container exceeds its memory limit, it is killed (OOMKilled). If it exceeds its CPU limit, it is throttled.
containers:
- name: api
  image: my-app:2.1.0
  resources:
    requests:
      memory: "256Mi"   # guaranteed 256 MiB RAM
      cpu: "250m"       # guaranteed 0.25 CPU cores
    limits:
      memory: "512Mi"   # killed if exceeds 512 MiB
      cpu: "1000m"      # throttled above 1 CPU core
CPU units: 1000m (millicores) equals 1 full CPU core. 250m is a quarter of a core. Memory units: Mi (mebibytes) and Gi (gibibytes) are binary units; M and G are decimal.
Quality of Service (QoS) Classes
Kubernetes assigns a QoS class based on how you set requests and limits:
- Guaranteed — requests equal limits for all containers. Highest priority, last to be evicted.
- Burstable — at least one container has a CPU or memory request or limit set, but the Pod does not meet the Guaranteed criteria. Medium priority.
- BestEffort — no requests or limits set on any container. First to be evicted under memory pressure.
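The three classes map directly onto how the resources block is written. Illustrative fragments (values are arbitrary):

```yaml
# Guaranteed: requests == limits for every container
resources:
  requests: { memory: "512Mi", cpu: "500m" }
  limits:   { memory: "512Mi", cpu: "500m" }
---
# Burstable: requests set, limits higher (or absent)
resources:
  requests: { memory: "128Mi", cpu: "100m" }
  limits:   { memory: "512Mi", cpu: "1000m" }
---
# BestEffort: no resources block on any container
```

You can confirm the assigned class with kubectl get pod my-app -o jsonpath='{.status.qosClass}'.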
8. Environment Variables and ConfigMaps
Environment variables are the standard way to inject configuration into containers. Kubernetes supports inline values, ConfigMap references, Secret references, and field references.
containers:
- name: app
  image: my-app:2.1.0
  env:
  # Inline value
  - name: LOG_LEVEL
    value: "info"
  # From a ConfigMap key
  - name: DATABASE_HOST
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: db_host
  # From a Secret key
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password
  # From Pod metadata (downward API)
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
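The downward API can also expose a container's own resource requests and limits via resourceFieldRef, which is useful for sizing thread pools or runtime heaps from inside the container. A sketch (the container name is a placeholder):

```yaml
# Exposing the container's own limits as env vars (illustrative fragment)
env:
- name: CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: app        # placeholder container name
      resource: limits.cpu
- name: MEMORY_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: app
      resource: limits.memory
```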
ConfigMap as Environment Variables
# Create a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  db_host: "postgres-svc.default.svc.cluster.local"
  db_port: "5432"
  log_level: "info"
  max_connections: "100"
---
# Inject all keys as environment variables
containers:
- name: app
  image: my-app:2.1.0
  envFrom:
  - configMapRef:
      name: app-config           # all keys become env vars
ConfigMap as a Volume
containers:
- name: app
  image: my-app:2.1.0
  volumeMounts:
  - name: config-vol
    mountPath: /etc/app/config
    readOnly: true
volumes:
- name: config-vol
  configMap:
    name: app-config

# Each key in the ConfigMap becomes a file
# /etc/app/config/db_host contains "postgres-svc.default.svc.cluster.local"
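If you only need some keys, or want different file names, an items list restricts and renames what gets projected. A sketch reusing the app-config map above:

```yaml
# Project only selected keys under custom paths (illustrative)
volumes:
- name: config-vol
  configMap:
    name: app-config
    items:
    - key: log_level
      path: logging/level       # file appears at <mountPath>/logging/level
    - key: db_host
      path: db/host
```

Keys not listed in items are simply not mounted, so the container sees only the files it needs.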
9. Secrets
Secrets store sensitive data like passwords, tokens, and TLS certificates. They are base64-encoded (not encrypted) by default, so always enable encryption at rest in production clusters.
# Create a Secret from the command line
kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password='s3cretP@ss'

# Or as YAML (values must be base64-encoded)
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4=             # echo -n 'admin' | base64
  password: czNjcmV0UEBzcw==     # echo -n 's3cretP@ss' | base64
---
# Use stringData to avoid manual base64 encoding
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:
  username: admin
  password: s3cretP@ss

# Mount as environment variables
containers:
- name: app
  env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password
  # Or mount as files
  volumeMounts:
  - name: secret-vol
    mountPath: /etc/secrets
    readOnly: true
volumes:
- name: secret-vol
  secret:
    secretName: db-secret
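Besides the generic Opaque type, Kubernetes defines built-in Secret types; the most common is kubernetes.io/tls for certificates. A sketch (the name and certificate paths are placeholders):

```yaml
# TLS Secret (built-in type) — usually created with:
#   kubectl create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: ""   # base64-encoded PEM certificate goes here
  tls.key: ""   # base64-encoded PEM private key goes here
```

Typed Secrets are validated for their required keys (tls.crt and tls.key here), so a malformed one is rejected at creation time rather than failing at mount time.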
10. Volumes
Pods are ephemeral — when a Pod is deleted, its filesystem is gone. Volumes provide persistent and shared storage for containers.
emptyDir
A temporary directory that exists for the lifetime of the Pod. Shared between containers in the same Pod, deleted when the Pod is removed.
volumes:
- name: scratch
  emptyDir: {}          # uses node disk
- name: cache
  emptyDir:
    medium: Memory      # uses RAM (tmpfs), faster but limited
    sizeLimit: 256Mi
hostPath
Mounts a file or directory from the host node's filesystem. Use with caution — it ties your Pod to a specific node and can be a security risk.
volumes:
- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
    type: Socket        # File | Directory | Socket | CharDevice
PersistentVolumeClaim (PVC)
The recommended way to use persistent storage. A PVC requests storage from the cluster, and Kubernetes provisions a PersistentVolume to satisfy it.
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce                # RWO: single node read-write
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard     # use cluster's default storage class
---
# Pod using the PVC
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: postgres
    image: postgres:16-alpine
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
11. Pod Networking
Every Pod gets its own unique IP address within the cluster. Containers inside the same Pod communicate over localhost. Pods communicate with other Pods using their IP addresses or through Services.
- Same Pod — containers share localhost. A sidecar on port 9090 is reachable at localhost:9090 from the main container.
- Pod to Pod — every Pod can reach every other Pod by IP without NAT (flat network model).
- Pod to Service — Services provide stable DNS names (my-svc.my-namespace.svc.cluster.local) that route to healthy Pod endpoints.
# Expose a Pod through a Service
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app          # routes to Pods with this label
  ports:
  - port: 80             # Service port
    targetPort: 8080     # Pod container port
  type: ClusterIP        # internal only (default)

# DNS resolution from within a Pod
# Short name (same namespace):
curl http://my-app-svc:80
# Fully qualified (cross-namespace):
curl http://my-app-svc.production.svc.cluster.local:80
12. Pod Security Context
Security contexts define privilege and access control settings for Pods and individual containers. They are essential for running secure workloads.
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:                  # Pod-level settings
    runAsNonRoot: true              # prevent running as root
    runAsUser: 1000                 # UID for all containers
    runAsGroup: 3000                # GID for all containers
    fsGroup: 2000                   # group owner of mounted volumes
    seccompProfile:
      type: RuntimeDefault          # enable default seccomp profile
  containers:
  - name: app
    image: my-app:2.1.0
    securityContext:                # container-level overrides
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true  # prevent writes to container filesystem
      capabilities:
        drop:
        - ALL                       # drop all Linux capabilities
        add:
        - NET_BIND_SERVICE          # only add what is needed
    volumeMounts:
    - name: tmp
      mountPath: /tmp               # writable tmp since root fs is read-only
  volumes:
  - name: tmp
    emptyDir: {}
Key security settings:
- runAsNonRoot: true — rejects the Pod if the container tries to run as UID 0
- readOnlyRootFilesystem: true — prevents malware from writing to the container filesystem
- allowPrivilegeEscalation: false — prevents a process from gaining more privileges than its parent
- capabilities.drop: [ALL] — removes all Linux capabilities, then add back only what is needed
13. Pod Disruption Budgets
A PodDisruptionBudget (PDB) limits the number of Pods that can be simultaneously unavailable during voluntary disruptions like node drains, cluster upgrades, or autoscaler scale-downs.
# Ensure at least 2 Pods are always running
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
---
# Or specify maximum unavailable
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 1     # at most 1 Pod down at a time
  selector:
    matchLabels:
      app: my-app
PDBs only protect against voluntary disruptions (node drain, cluster upgrade). They do not protect against involuntary disruptions like hardware failures or kernel panics. For high availability, combine PDBs with multiple replicas spread across nodes using topologySpreadConstraints or pod anti-affinity.
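A hedged sketch of the topologySpreadConstraints mentioned above, placed in a Pod (or Pod template) spec; the label values are placeholders:

```yaml
# Spread replicas evenly across nodes (illustrative fragment)
spec:
  topologySpreadConstraints:
  - maxSkew: 1                          # per-node counts may differ by at most 1
    topologyKey: kubernetes.io/hostname # spread by node; use topology.kubernetes.io/zone for zones
    whenUnsatisfiable: DoNotSchedule    # or ScheduleAnyway for a soft preference
    labelSelector:
      matchLabels:
        app: my-app
```

Combined with a PDB, this means a single node failure or drain takes out at most a small, bounded fraction of your replicas.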
14. Debugging Pods
View Logs
# Current logs
kubectl logs my-pod
# Follow logs in real time
kubectl logs -f my-pod
# Logs from a specific container (multi-container Pod)
kubectl logs my-pod -c sidecar
# Previous container logs (after a crash/restart)
kubectl logs my-pod --previous
# Last 100 lines
kubectl logs my-pod --tail=100
# Logs from the last hour
kubectl logs my-pod --since=1h
Execute Commands
# Open a shell inside a running container
kubectl exec -it my-pod -- /bin/sh
# Run a specific command
kubectl exec my-pod -- cat /etc/config/app.conf
# Exec into a specific container
kubectl exec -it my-pod -c sidecar -- /bin/sh
Describe Pod
# Detailed Pod info: events, conditions, container statuses
kubectl describe pod my-pod
# Key things to check in the output:
# - Events section: scheduling failures, image pull errors, probe failures
# - Conditions: PodScheduled, Initialized, ContainersReady, Ready
# - Container State: Waiting (reason), Running, Terminated (exit code)
Common Debugging Scenarios
# Pod stuck in Pending: check events for scheduling failures
kubectl describe pod my-pod | grep -A 10 Events
# Pod in CrashLoopBackOff: check logs from previous crash
kubectl logs my-pod --previous
# Pod in ImagePullBackOff: verify image name and registry auth
kubectl describe pod my-pod | grep "Failed"
# Debug networking from inside a Pod
kubectl exec -it my-pod -- nslookup my-service
kubectl exec -it my-pod -- wget -qO- http://my-service:8080/health
# Create a temporary debug Pod in the same namespace
kubectl run debug --image=busybox:1.36 --rm -it -- sh
15. Best Practices
- Always set resource requests and limits. Without them, a single Pod can consume all node resources and starve other workloads.
- Use liveness and readiness probes. Liveness probes recover from deadlocks; readiness probes prevent traffic from reaching unhealthy Pods.
- Run as non-root. Set runAsNonRoot: true and allowPrivilegeEscalation: false in the security context.
- Use read-only root filesystem. Set readOnlyRootFilesystem: true and mount writable directories explicitly with emptyDir volumes.
- Pin image tags. Never use :latest in production. Use specific version tags or SHA digests for reproducibility.
- Use Deployments, not bare Pods. Deployments provide rolling updates, rollbacks, replica management, and self-healing.
- Set Pod Disruption Budgets. Ensure your service remains available during cluster maintenance and upgrades.
- Handle SIGTERM gracefully. Your application should listen for SIGTERM and shut down cleanly within the grace period.
- Use ConfigMaps for configuration, Secrets for credentials. Keep configuration separate from container images for portability across environments.
- Set appropriate terminationGracePeriodSeconds. Match this to your application's shutdown time — the default 30 seconds is often too short for long-running requests.
Frequently Asked Questions
What is the difference between a Pod and a container in Kubernetes?
A container is a single isolated process running from a container image. A Pod is a Kubernetes abstraction that wraps one or more containers that share the same network namespace (same IP address and port space), can share volumes, and are always co-located and co-scheduled on the same node. The Pod is the smallest deployable unit in Kubernetes. You never run a bare container in Kubernetes — you always run it inside a Pod. Most Pods contain a single container, but multi-container Pods are used for sidecars, init containers, and ambassador patterns.
What are the Pod lifecycle phases in Kubernetes?
A Pod goes through five phases: Pending (accepted by the cluster but containers are not yet running, possibly waiting for scheduling or image pulls), Running (at least one container is running or starting), Succeeded (all containers terminated successfully with exit code 0 and will not restart), Failed (all containers terminated and at least one exited with a non-zero exit code), and Unknown (the Pod state cannot be determined, usually due to a communication error with the node). You can check the phase with kubectl get pod and see detailed conditions with kubectl describe pod.
What is the difference between liveness, readiness, and startup probes?
Liveness probes detect when a container is stuck or deadlocked and needs a restart. If a liveness probe fails, Kubernetes kills and restarts the container. Readiness probes detect when a container is ready to accept traffic. If a readiness probe fails, the Pod is removed from Service endpoints so it stops receiving requests, but the container is not restarted. Startup probes are used for slow-starting containers. They disable liveness and readiness checks until the startup probe succeeds, preventing Kubernetes from killing a container that is still initializing. All three probes support HTTP GET, TCP socket, gRPC, and exec command checks.
How do init containers work in Kubernetes Pods?
Init containers run before the main application containers start and must complete successfully before the next init container or main containers begin. They run sequentially in the order listed in the Pod spec. Common uses include waiting for a database to be ready, running database migrations, downloading configuration files, setting up file permissions, or populating a shared volume with data. Init containers can use different images from the main containers and can have elevated privileges for setup tasks. If an init container fails, Kubernetes restarts it according to the Pod's restart policy; with restartPolicy: Never, the whole Pod is marked failed.
What is a Pod Disruption Budget and why should I use one?
A PodDisruptionBudget (PDB) limits the number of Pods that can be simultaneously unavailable during voluntary disruptions like node drains, cluster upgrades, or autoscaler scale-downs. You specify either minAvailable (minimum number of Pods that must remain running) or maxUnavailable (maximum number that can be down at once). For example, if you have 5 replicas and set maxUnavailable to 1, Kubernetes will only evict one Pod at a time during a rolling node drain, ensuring at least 4 are always serving traffic. PDBs do not protect against involuntary disruptions like hardware failures.
Conclusion
Pods are the atomic unit of Kubernetes. Every concept in the platform — Deployments, Services, scaling, networking — builds on top of the Pod abstraction. Mastering Pod configuration means understanding how containers share resources, how probes keep your applications healthy, how resource limits prevent outages, and how security contexts protect your cluster.
Start by writing a basic single-container Pod spec, then progressively add probes, resource limits, security contexts, and ConfigMaps. When you are ready for production, use Deployments to manage Pod replicas and PodDisruptionBudgets to maintain availability during maintenance.
Learn More
- Kubernetes: The Complete Guide — comprehensive Kubernetes guide covering clusters, Deployments, Services, and more
- Docker Compose: The Complete Guide — orchestrate multi-container applications with Compose
- Docker: The Complete Developer's Guide — images, Dockerfiles, security, CI/CD, and performance
- Kubernetes YAML Generator — generate deployment, service, and ingress manifests
- Docker Cheat Sheet — essential Docker and Compose commands quick reference