GitLab CI/CD: The Complete Guide for 2026
GitLab CI/CD is one of the most powerful continuous integration and delivery platforms available. It lives inside your GitLab repository, configured entirely through a single .gitlab-ci.yml file. Every commit triggers a pipeline that can build, test, scan, and deploy your application — with zero external tooling required. This guide covers everything from basic pipeline syntax to advanced patterns like multi-project pipelines, review apps, and security scanning.
1. What Is GitLab CI/CD
GitLab CI/CD is a built-in automation engine that executes pipelines defined in your repository. Unlike external CI systems, it requires no plugins, no separate servers, and no integration setup — it works out of the box with every GitLab project.
The core model is straightforward:
```
Commit pushed
  → GitLab reads .gitlab-ci.yml
  → Pipeline created with stages
  → Jobs assigned to runners
  → Runners execute jobs in containers
  → Results reported back to GitLab
```
Key concepts:
- Pipeline — the top-level container for all CI/CD work triggered by a single event (push, merge request, schedule, API call).
- Stage — a phase of the pipeline (e.g., build, test, deploy). Stages run sequentially; all jobs in a stage must pass before the next stage starts.
- Job — a task that runs in a single runner. Jobs within the same stage run in parallel by default.
- Runner — an agent that picks up and executes jobs. Runs in Docker containers, Kubernetes pods, or directly on a shell.
2. The .gitlab-ci.yml File
Every pipeline is defined by a single file: .gitlab-ci.yml in the repository root. Here is a minimal working example:
```yaml
# .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  image: node:20-alpine
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/

run-tests:
  stage: test
  image: node:20-alpine
  script:
    - npm ci
    - npm test

deploy-production:
  stage: deploy
  image: alpine:latest
  script:
    - apk add --no-cache rsync openssh-client
    - rsync -avz dist/ user@server:/var/www/app/
  only:
    - main
```
The file supports global keywords that apply to all jobs unless overridden:
- image — default Docker image for all jobs
- before_script / after_script — commands that run before/after every job's script
- variables — global environment variables
- default — set defaults for image, retry, timeout, tags, and more
```yaml
default:
  image: node:20-alpine
  retry: 2
  timeout: 10 minutes

variables:
  NODE_ENV: production

before_script:
  - echo "Pipeline $CI_PIPELINE_ID started"
```
3. Stages and Jobs
Stages define the order of execution. Jobs within the same stage run in parallel (if enough runners are available). If any job fails, the pipeline stops at that stage by default.
```yaml
stages:
  - lint
  - build
  - test
  - deploy

eslint:
  stage: lint
  image: node:20-alpine
  script:
    - npm ci
    - npx eslint src/

stylelint:
  stage: lint
  image: node:20-alpine
  script:
    - npm ci
    - npx stylelint "src/**/*.css"

# Both lint jobs run in parallel.
# Build only starts after both pass.
compile:
  stage: build
  image: node:20-alpine
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour
```
Use needs to create a DAG (directed acyclic graph) that breaks out of the strict stage ordering:
```yaml
unit-tests:
  stage: test
  needs: ["compile"]  # Starts as soon as compile finishes
  script:
    - npm test

integration-tests:
  stage: test
  needs: ["compile"]  # Also starts when compile finishes
  script:
    - npm run test:integration
```
The needs keyword can significantly speed up pipelines by letting jobs start as soon as their actual dependencies are met, rather than waiting for the entire previous stage.
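The `needs` keyword also accepts an expanded form that controls whether a job downloads the artifacts of the jobs it depends on. A short sketch:

```yaml
unit-tests:
  stage: test
  needs:
    - job: compile
      artifacts: true  # download compile's artifacts (true is the default)
  script:
    - npm test
```

Setting artifacts: false is useful when a job only needs to wait for another job's status and skipping the artifact download saves time.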
4. GitLab Runners
Runners are the machines that execute your jobs. GitLab offers three scopes:
- Shared runners — provided by GitLab.com, available to all projects. Good for most workloads.
- Group runners — registered to a group, available to all projects within it. Ideal for organizations.
- Project runners — dedicated to a specific project. Use when you need custom hardware or isolated environments.
Installing and registering a runner:
```shell
# Install GitLab Runner (Linux, Debian/Ubuntu)
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
sudo apt install gitlab-runner

# Register the runner. In current GitLab versions you first create the
# runner in the UI (which is also where description and tags are set)
# and register it with the resulting authentication token; the older
# --registration-token flow is deprecated.
sudo gitlab-runner register \
  --url https://gitlab.com/ \
  --token YOUR_RUNNER_AUTH_TOKEN \
  --executor docker \
  --docker-image alpine:latest
```
Executors determine how a job runs:
- Docker — runs each job in a fresh container. Most common and recommended.
- Shell — runs commands directly on the runner host. Simple but no isolation.
- Kubernetes — runs each job as a pod. Scales automatically with your cluster.
- Docker Machine — auto-provisions cloud VMs on demand (AWS, GCP, Azure).
Use tags to route jobs to specific runners:
```yaml
gpu-training:
  stage: build
  tags:
    - gpu
    - linux
  script:
    - python train_model.py
```
5. Variables and Secrets
GitLab CI provides variables at multiple levels. Predefined variables are always available:
```shell
# Useful predefined variables
$CI_COMMIT_SHA         # Full commit hash
$CI_COMMIT_SHORT_SHA   # First 8 characters
$CI_COMMIT_BRANCH      # Branch name
$CI_COMMIT_TAG         # Tag name (if tag pipeline)
$CI_PIPELINE_ID        # Unique pipeline ID
$CI_PROJECT_DIR        # Repo checkout path
$CI_REGISTRY           # Container registry URL
$CI_REGISTRY_IMAGE     # Project's registry image path
$CI_ENVIRONMENT_NAME   # Current environment name
$GITLAB_USER_EMAIL     # Email of the user who triggered the pipeline
```
Custom variables can be defined in the YAML, in the project UI, or at the group/instance level:
```yaml
# In .gitlab-ci.yml (non-sensitive only)
variables:
  DATABASE_URL: "postgres://localhost:5432/testdb"
  NODE_ENV: "production"

deploy:
  stage: deploy
  variables:
    DEPLOY_ENV: "staging"  # Job-level override
  script:
    - echo "Deploying to $DEPLOY_ENV"
```
For secrets, always use the GitLab UI (Settings > CI/CD > Variables):
- Masked — value is hidden in job logs (must be at least 8 characters, no newlines).
- Protected — only available on protected branches and tags.
- Environment scope — limit a variable to a specific environment (e.g., production).
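As an illustration, a job can consume a UI-defined secret by name. The variable PROD_API_TOKEN here is hypothetical; it would be created under Settings > CI/CD > Variables, marked masked and protected:

```yaml
deploy:
  stage: deploy
  script:
    # PROD_API_TOKEN never appears in the repo; GitLab injects it at
    # runtime and masks its value in the job log.
    - curl --header "Authorization: Bearer $PROD_API_TOKEN" https://api.example.com/deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```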
6. Caching and Artifacts
Caching and artifacts serve different purposes. Cache speeds up jobs by reusing downloaded dependencies. Artifacts pass files between jobs in the same pipeline.
```yaml
build:
  stage: build
  image: node:20-alpine
  cache:
    key: ${CI_COMMIT_REF_SLUG}  # Cache per branch
    paths:
      - node_modules/
    policy: pull-push  # pull = read, push = write
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 day
    when: on_success  # Only save on success

test:
  stage: test
  image: node:20-alpine
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull  # Only read the cache, don't update it
  script:
    - npm test
  artifacts:
    reports:
      junit: test-results.xml  # GitLab parses JUnit reports
```
Cache key strategies:
- key: ${CI_COMMIT_REF_SLUG} — cache per branch
- key: files: [package-lock.json] — cache keyed on the lock file hash (invalidates when dependencies change)
- key: files: plus prefix: ${CI_JOB_NAME} — separate caches per job name (prefix is used together with files)
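The files-based key from the list above looks like this in full; a sketch:

```yaml
install:
  image: node:20-alpine
  cache:
    key:
      files:
        - package-lock.json   # new key (and fresh cache) when the lockfile changes
      prefix: ${CI_JOB_NAME}  # optional: keeps each job's cache separate
    paths:
      - node_modules/
  script:
    - npm ci
```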
7. Docker-in-Docker
Building Docker images inside CI requires special setup. GitLab supports two approaches:
```yaml
# Approach 1: Docker-in-Docker (DinD) service
build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

# Approach 2: Kaniko (no Docker daemon needed)
build-image-kaniko:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.22.0-debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context $CI_PROJECT_DIR
      --dockerfile $CI_PROJECT_DIR/Dockerfile
      --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
```
Kaniko is generally preferred because it does not require privileged mode, making it more secure and compatible with shared runners and Kubernetes.
8. Environments and Deployments
GitLab environments give you visibility into what is deployed where, with rollback capability and deployment history.
```yaml
deploy-staging:
  stage: deploy
  script:
    - deploy_to_server staging
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"

deploy-production:
  stage: deploy
  script:
    - deploy_to_server production
  environment:
    name: production
    url: https://example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual  # Require manual approval
      allow_failure: false
```
Use when: manual for production deployments that require human approval. The environment keyword tracks deployments in the GitLab UI under Deployments > Environments, with one-click rollback to any previous deployment.
9. Rules, Only/Except, and Workflow
The rules keyword is the modern way to control when jobs run. It replaces the older only/except syntax:
```yaml
test:
  stage: test
  script: npm test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"  # Run on MRs
    - if: $CI_COMMIT_BRANCH == "main"                   # Run on main
    - if: $CI_COMMIT_TAG                                # Run on tags
      when: manual                                      # But require manual trigger

# Changes-based rules
deploy-docs:
  stage: deploy
  script: mkdocs build
  rules:
    - changes:
        - docs/**/*  # Only run if docs changed
        - mkdocs.yml
```
The workflow keyword controls pipeline-level behavior — whether a pipeline is created at all:
```yaml
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_COMMIT_TAG
    # All other events: no pipeline created
```
This prevents duplicate pipelines when both a branch push and a merge request event trigger simultaneously.
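A common variant skips branch pipelines explicitly whenever the branch already has an open merge request, so only the merge request pipeline runs:

```yaml
workflow:
  rules:
    # Always run merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # Skip branch pipelines when the branch has an open MR
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    # Otherwise, run branch pipelines
    - if: $CI_COMMIT_BRANCH
```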
10. Multi-Project and Parent-Child Pipelines
Parent-child pipelines split a large pipeline into smaller, maintainable files:
```yaml
# .gitlab-ci.yml (parent)
stages:
  - triggers

trigger-frontend:
  stage: triggers
  trigger:
    include: frontend/.gitlab-ci.yml
    strategy: depend  # Parent waits for child status

trigger-backend:
  stage: triggers
  trigger:
    include: backend/.gitlab-ci.yml
    strategy: depend
```
Multi-project pipelines trigger pipelines in other repositories:
```yaml
deploy-infrastructure:
  stage: deploy
  trigger:
    project: devops/infrastructure
    branch: main
    strategy: depend
```
This pattern is common in microservice architectures where deploying one service requires updating shared infrastructure or triggering integration tests across services.
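The triggering job can also pass variables to the downstream pipeline; job-level variables are forwarded by default. A sketch, where SERVICE_NAME is an illustrative variable the downstream config is assumed to read:

```yaml
deploy-infrastructure:
  stage: deploy
  variables:
    SERVICE_NAME: $CI_PROJECT_NAME  # forwarded to the downstream pipeline
  trigger:
    project: devops/infrastructure
    branch: main
    strategy: depend
```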
11. Review Apps and Dynamic Environments
Review apps automatically deploy each merge request to a unique, temporary environment:
```yaml
deploy-review:
  stage: deploy
  script:
    - deploy_to_k8s review-$CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop-review
    auto_stop_in: 1 week
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stop-review:
  stage: deploy
  script:
    - teardown_k8s review-$CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
```
Each MR gets its own live preview URL. Reviewers can interact with the actual deployed app instead of reading code diffs. The auto_stop_in keyword automatically cleans up stale environments.
12. Security Scanning
GitLab provides built-in security scanning templates. Include them in your pipeline to scan for vulnerabilities:
```yaml
include:
  - template: Security/SAST.gitlab-ci.yml              # Static analysis
  - template: Security/Secret-Detection.gitlab-ci.yml  # Secret detection
  - template: Security/Dependency-Scanning.gitlab-ci.yml

# Override scanning variables if needed
sast:
  variables:
    SAST_EXCLUDED_PATHS: "spec,test,tests"

# DAST (Dynamic Application Security Testing)
# Note: the dast stage must appear in your stages list
dast:
  stage: dast
  image:
    name: registry.gitlab.com/gitlab-org/security-products/dast
  variables:
    DAST_WEBSITE: https://staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```
Scanning types:
- SAST — analyzes source code for security flaws without running it.
- DAST — tests a running application for vulnerabilities like XSS and SQL injection.
- Dependency scanning — checks your dependencies against known vulnerability databases.
- Secret detection — finds API keys, tokens, and passwords accidentally committed to the repo.
- Container scanning — analyzes Docker images for OS-level vulnerabilities.
Results appear in the merge request widget and the project's Security Dashboard (requires GitLab Ultimate for the full dashboard).
13. GitLab CI vs GitHub Actions vs Jenkins
| Feature | GitLab CI | GitHub Actions | Jenkins |
|---|---|---|---|
| Configuration | Single .gitlab-ci.yml | Multiple workflow files | Jenkinsfile or UI |
| Execution Model | Stage-based (sequential stages) | Job dependency graph | Pipeline stages |
| Container Registry | Built-in | GHCR (separate) | Plugin required |
| Environments | Native with rollback | Native (newer feature) | Plugin-based |
| Security Scanning | Built-in SAST/DAST | CodeQL / third-party | Plugins only |
| Self-hosting | GitLab CE/EE | GitHub Enterprise | Default (self-hosted) |
| Ecosystem | Templates, CI catalog | Actions Marketplace | Plugin ecosystem |
Choose GitLab CI if you want an all-in-one DevOps platform with built-in registry, security, and environments. Choose GitHub Actions for deep GitHub integration and community actions. Choose Jenkins when you need maximum flexibility and already run your own infrastructure.
14. Real-World Pipeline Examples
Node.js Application
```yaml
stages:
  - install
  - quality
  - build
  - deploy

variables:
  NODE_ENV: production

install-deps:
  stage: install
  image: node:20-alpine
  script:
    - npm ci --cache .npm
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - .npm/
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 hour

lint-and-test:
  stage: quality
  image: node:20-alpine
  needs: ["install-deps"]
  script:
    - npx eslint src/
    - npm test -- --coverage
  coverage: '/Lines\s*:\s*(\d+\.?\d*)%/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

build-app:
  stage: build
  image: node:20-alpine
  needs: ["install-deps"]
  script:
    - npm run build
  artifacts:
    paths:
      - dist/

deploy-prod:
  stage: deploy
  needs: ["lint-and-test", "build-app"]
  image: alpine:latest
  script:
    - apk add --no-cache openssh-client rsync
    - rsync -avz --delete dist/ $DEPLOY_USER@$DEPLOY_HOST:/var/www/app/
  environment:
    name: production
    url: https://myapp.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
```
Python Application
```yaml
stages:
  - test
  - build
  - deploy

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.pip-cache"

.python-base:
  image: python:3.12-slim
  cache:
    key: ${CI_COMMIT_REF_SLUG}-pip
    paths:
      - .pip-cache/

pytest:
  extends: .python-base
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest tests/ -v --junitxml=report.xml --cov=app
  artifacts:
    reports:
      junit: report.xml

build-docker:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```
Multi-Service Docker Compose
```yaml
stages:
  - build
  - test
  - push

build-images:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker compose build
    - docker compose up -d
    - docker compose ps
  artifacts:
    paths:
      - docker-compose.yml

integration-test:
  stage: test
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker compose up -d
    - docker compose exec -T app pytest tests/integration/
    - docker compose down
```
15. Best Practices
- Use rules over only/except — the rules keyword is more powerful and is the recommended approach going forward.
- Set workflow:rules — prevent duplicate pipelines for branch pushes that also create merge request events.
- Use needs for DAG pipelines — break strict stage ordering to speed up pipelines by running jobs as soon as their dependencies are met.
- Cache dependencies, not build output — use cache for node_modules or .pip-cache; use artifacts for build results like dist/.
- Set expire_in on artifacts — the default is 30 days, which can consume storage quickly. Use a short expiry for intermediate artifacts.
- Use hidden jobs and extends — prefix jobs with a dot (.template-job) to create reusable templates, and use extends to inherit from them.
- Pin image tags — use node:20-alpine instead of node:latest to avoid breaking pipelines when the upstream image changes.
- Use when: manual for production deploys — require human approval before deploying to production.
- Mask sensitive variables — always mark secrets as masked and protected in the GitLab UI.
- Validate before pushing — use the GitLab CI Lint tool (CI/CD > Pipelines > CI Lint) or the /ci/lint API endpoint to validate your YAML.
- Keep pipelines fast — use caching, parallelize with needs, and split large pipelines into parent-child structures.
- Use interruptible: true — mark jobs that can be safely cancelled when a newer pipeline starts on the same branch.
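Several of these practices combine naturally in one sketch: a hidden template job, a pinned image tag, interruptible jobs, and extends for reuse:

```yaml
.node-job:               # hidden job: never runs on its own
  image: node:20-alpine  # pinned tag, not node:latest
  interruptible: true    # safe to cancel if a newer pipeline starts
  before_script:
    - npm ci

lint:
  extends: .node-job     # inherits image, interruptible, before_script
  stage: test
  script:
    - npx eslint src/
```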
16. FAQ
What is GitLab CI/CD?
GitLab CI/CD is a built-in continuous integration and continuous delivery platform in GitLab. It uses a .gitlab-ci.yml file in your repository root to define pipelines that automatically build, test, and deploy your code whenever you push changes. Pipelines are composed of stages (like build, test, deploy) that run sequentially, with jobs within each stage running in parallel by default.
What is the difference between GitLab CI/CD and GitHub Actions?
GitLab CI/CD uses a single .gitlab-ci.yml file with stages and jobs, while GitHub Actions uses multiple YAML workflow files with event-driven triggers. GitLab has built-in container registry, environments, review apps, and security scanning. GitHub Actions has a larger marketplace of community actions. GitLab runners are self-hosted or shared, while GitHub provides hosted runners. GitLab pipelines follow a strict stage-based execution model, whereas GitHub Actions jobs are more flexible with dependency graphs.
How do I write a .gitlab-ci.yml file?
Create a .gitlab-ci.yml file in your repository root. Define stages (like build, test, deploy), then create jobs that reference those stages. Each job specifies a stage, a Docker image to run in, and a script with commands to execute. For example: define stages: [build, test] followed by a job like unit-tests: with stage: test, image: node:20, and script: [npm ci, npm test]. GitLab automatically detects and runs the pipeline on every push.
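Written out, the minimal file described above looks like this:

```yaml
# .gitlab-ci.yml
stages: [build, test]

unit-tests:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test
```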
What are GitLab runners and how do they work?
GitLab runners are agents that execute CI/CD jobs. There are three types: shared runners (provided by GitLab for all projects), group runners (available to all projects in a group), and project-specific runners (dedicated to a single project). Runners use executors to determine how jobs run — the Docker executor runs each job in a fresh container, the Shell executor runs directly on the runner machine, and the Kubernetes executor runs jobs as pods in a cluster. Install runners with gitlab-runner install and register them with gitlab-runner register.
How do I use variables and secrets in GitLab CI?
GitLab CI variables can be defined at multiple levels: in .gitlab-ci.yml (for non-sensitive values), in project settings under CI/CD > Variables (for secrets), or at group/instance level for shared values. Variables set in the UI can be masked (hidden from logs) and protected (only available on protected branches). Predefined variables like CI_COMMIT_SHA, CI_PIPELINE_ID, and CI_PROJECT_DIR are automatically available in every job. Reference variables in scripts using $VARIABLE_NAME syntax.