Docker Container Exits Immediately: Troubleshooting Guide (2026)
Need to validate your startup config before re-running? Use Dockerfile Builder to sanity-check Dockerfile directives and command shape quickly.
If a container starts and exits in seconds, the fix is rarely random retries. A short command-driven workflow reveals whether the problem is startup command behavior, missing runtime dependencies, environment mismatch, or a Dockerfile assumption that fails at runtime.
1. Read the first failure signals
Start with the minimum evidence set: container status, exit code, and startup logs.
```shell
docker ps -a --no-trunc
docker logs <container_name_or_id>
docker inspect <container_name_or_id> --format '{{.State.ExitCode}} {{.State.Error}}'
```
| Signal | Likely issue | First next step |
|---|---|---|
| Exited (0) | Main process finished immediately | Confirm CMD/ENTRYPOINT is intended to stay alive |
| Exited (1) or non-zero code | Runtime error at startup | Read full logs and test command manually |
| No logs or vague logs | Entrypoint wrapper failure | Override entrypoint and run shell interactively |
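The table above can be condensed into a small helper. This is an illustrative sketch (the function name and hint strings are made up), but the numeric conventions are Docker's: 125 means `docker run` itself failed, 126 means the command was found but not executable, 127 means the command was not found, and 137 means the process was killed by SIGKILL (often the OOM killer).

```shell
# explain_exit: hypothetical helper mapping a container exit code
# to a first triage hint, following Docker's exit-code conventions.
explain_exit() {
  case "$1" in
    0)   echo "process finished cleanly; confirm CMD/ENTRYPOINT stays alive" ;;
    125) echo "docker run itself failed; check flags and image name" ;;
    126) echo "command found but not executable; check permissions" ;;
    127) echo "command not found; check PATH and binary spelling" ;;
    137) echo "killed (SIGKILL); check memory limits and OOM events" ;;
    *)   echo "application error; read the full startup logs" ;;
  esac
}

explain_exit 127
```

Feed it the value from `docker inspect --format '{{.State.ExitCode}}'` to get a starting direction before reading logs in depth.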
2. Run the fast triage commands
These commands quickly surface the most common startup failures.
```shell
# inspect image startup config
docker image inspect <image:tag> --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}'

# run interactively with shell to inspect files and command paths
docker run --rm -it --entrypoint sh <image:tag>

# inside shell, verify executable path and permissions
which your-start-command || true
ls -la /app
```
If `docker run <image:tag>` exits but `docker run --entrypoint sh <image:tag>` stays alive, the startup command chain is the primary suspect.
3. Verify Dockerfile command assumptions
Containers often fail because CMD/ENTRYPOINT assumptions changed during refactors.
- Use JSON-array form for predictable argument handling: `CMD ["node", "server.js"]`
- Confirm copied files exist at the runtime path after `WORKDIR` and `COPY`
- Avoid relying on shell features unless explicitly using a shell entrypoint
- Validate executable permissions for scripts and binaries
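Taken together, those checks correspond to a Dockerfile shape like the following sketch. The base image, paths, and script name are placeholders, not a drop-in file:

```dockerfile
FROM node:20-alpine          # example base image
WORKDIR /app                 # set the runtime path BEFORE copying
COPY . /app                  # files land where CMD expects them
RUN chmod +x /app/start.sh   # fix executable permissions at build time
# JSON-array (exec) form: no shell wrapper, predictable arguments,
# and signals reach the node process directly
CMD ["node", "server.js"]
```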
If your service is Python-based, compare against patterns in Python Testing with Pytest Guide and check dependency/runtime parity before changing app logic.
4. Check runtime and environment mismatches
A container may build successfully but fail at startup due to missing env vars, secrets, or incompatible runtime flags.
```shell
# compare expected env keys
docker inspect <container> --format '{{json .Config.Env}}' | jq .

# test with explicit env values
docker run --rm -e APP_ENV=prod -e PORT=8080 <image:tag>

# if using compose
docker compose config
```
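Before reaching for Docker at all, a plain shell loop can flag required keys that are unset in the current environment. A minimal sketch; the `required` list and sample values are made-up examples:

```shell
# Hypothetical pre-flight check: list the env keys the app needs and
# report any that are not set in the current environment.
required="APP_ENV PORT DATABASE_URL"

export APP_ENV=prod PORT=8080   # sample values for the demo
unset DATABASE_URL              # deliberately left unset for the demo

missing=""
for key in $required; do
  # printenv exits non-zero when the variable is not set
  printenv "$key" >/dev/null 2>&1 || missing="$missing $key"
done
echo "missing:$missing"   # → missing: DATABASE_URL
```

The same list can back a CI step that fails fast, before `docker run` ever gets the chance to.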
5. Define done and prevent repeat failures
A startup triage is complete when:
- Exit reason is stated clearly with evidence (logs + exit code).
- Startup succeeds in a clean run with expected environment values.
- Dockerfile command and runtime assumptions are documented in code comments or runbooks.
- A quick regression check exists (health check, smoke test, or container-start CI step).
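For the regression-check item, Docker's built-in HEALTHCHECK instruction is one lightweight option. A sketch assuming the service exposes an HTTP health endpoint on port 8080 (the path and port are assumptions):

```dockerfile
# Mark the container unhealthy if the endpoint stops answering.
HEALTHCHECK --interval=10s --timeout=3s --start-period=15s --retries=3 \
  CMD wget -qO- http://localhost:8080/health || exit 1
```

`docker ps` then reports health status alongside the container state, which makes silent startup regressions visible.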
FAQ
Why does a container exit with code 0 but still fail my deployment?
Exit code 0 means the configured process ended successfully, not that the service stayed available. Long-running services must keep the main process alive.
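A common way this happens is backgrounding the main process: the shell that Docker treats as PID 1 exits successfully even though a child was launched. The pitfall can be reproduced without Docker, with `sleep` standing in for the service:

```shell
start=$(date +%s)
# Backgrounding the "service": the wrapping shell exits immediately.
# Inside a container, that means PID 1 is gone and the container stops.
sh -c 'sleep 2 &'
status=$?
elapsed=$(( $(date +%s) - start ))
echo "wrapper exited after ${elapsed}s with status ${status}"
```

The wrapper returns status 0 almost instantly, long before the backgrounded `sleep 2` finishes. The fix is to keep the service in the foreground (for example, `exec` the binary) so it remains the container's main process.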
Should I use restart: always as the first fix?
No. First capture logs and root cause. Restart policies are stability controls after startup behavior is correct.
What if logs are empty?
Run with an interactive shell entrypoint and invoke the startup command manually. Empty logs usually indicate early command resolution or permission failure.
How do I prevent this from recurring?
Add startup smoke checks in CI and keep CMD/ENTRYPOINT changes visible in code review with explicit runtime assumptions.