Nginx Reverse Proxy: The Complete Guide for 2026

Published February 12, 2026 · 30 min read

A reverse proxy sits between your clients and your backend servers. It receives every incoming request, decides which backend should handle it, forwards the request, and returns the response. Nginx is the most widely deployed reverse proxy on the internet — it powers over 30% of all web servers and handles reverse proxying for companies from small startups to Netflix.

This guide covers everything you need to configure nginx as a production reverse proxy: basic proxying, load balancing algorithms, SSL termination, WebSocket support, caching, rate limiting, security hardening, and troubleshooting the errors you will inevitably encounter.

⚙ Tip: If your upstream app or admin port is only reachable on a private network, you can safely access it via SSH port forwarding. Use our SSH Tunnel Builder to generate -L/-R/-D commands with keepalives and ProxyJump.

Table of Contents

  1. What Is a Reverse Proxy and Why Use Nginx
  2. Installing Nginx
  3. Basic Reverse Proxy Configuration
  4. Upstream Blocks and Load Balancing
  5. SSL/TLS Termination with Let's Encrypt
  6. WebSocket Proxying
  7. Caching
  8. Rate Limiting
  9. Health Checks
  10. Proxy Headers
  11. Security Hardening
  12. Multiple Virtual Hosts / Applications
  13. HTTP/2 and Gzip Configuration
  14. Common Troubleshooting
  15. Production Best Practices
  16. FAQ

1. What Is a Reverse Proxy and Why Use Nginx

A forward proxy hides clients from servers (like a VPN or corporate proxy). A reverse proxy hides servers from clients. Your users connect to nginx, not directly to your application. This provides several critical benefits:

  - Load balancing across multiple backend servers
  - SSL/TLS termination in one place instead of per application
  - Caching and compression of backend responses
  - Rate limiting and access control before traffic reaches your app
  - Hiding your infrastructure topology from the outside world

Nginx excels as a reverse proxy because of its event-driven architecture. A single nginx worker process can handle thousands of concurrent connections with minimal memory. Apache's traditional prefork and worker MPMs use a process- or thread-per-connection model that consumes far more resources under load (its newer event MPM narrows the gap).

2. Installing Nginx

The commands below install your distribution's package, which is fine for most deployments. If you need the newest stable release, add the official nginx.org repository first instead of relying on a potentially outdated distro package.

# Ubuntu/Debian
sudo apt update && sudo apt install -y nginx

# RHEL/CentOS/Rocky (yum is an alias for dnf on RHEL 8+)
sudo dnf install -y epel-release && sudo dnf install -y nginx

# Verify, enable, and start
nginx -v
sudo systemctl enable --now nginx
sudo nginx -t    # test configuration syntax

3. Basic Reverse Proxy Configuration

The simplest reverse proxy forwards all requests to a single backend. The proxy_pass directive is the core of every nginx reverse proxy configuration.

# /etc/nginx/conf.d/app.conf
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Trailing slash matters. The behavior of proxy_pass changes based on whether the URI has a trailing slash:

# Request: GET /api/users

# No URI in proxy_pass - passes the full original URI
location /api/ {
    proxy_pass http://backend;
    # Forwards: GET /api/users -> backend receives /api/users
}

# URI in proxy_pass (trailing slash) - strips the location prefix
location /api/ {
    proxy_pass http://backend/;
    # Forwards: GET /api/users -> backend receives /users
}

# Explicit path rewrite
location /v2/ {
    proxy_pass http://backend/api/v2/;
    # Forwards: GET /v2/users -> backend receives /api/v2/users
}
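The stripping behavior can be reproduced with plain shell prefix removal, which is a handy way to sanity-check a rewrite before touching the config (illustrative only; nginx does this internally):

```shell
#!/bin/sh
# Reproduce the three proxy_pass rewrites with shell prefix-stripping.
req=/api/users

# Case 1: no URI in proxy_pass -> the full original URI is forwarded
echo "$req"                       # /api/users

# Case 2: location /api/ with proxy_pass http://backend/ -> prefix stripped
echo "/${req#/api/}"              # /users

# Case 3: location /v2/ with proxy_pass http://backend/api/v2/
req2=/v2/users
echo "/api/v2/${req2#/v2/}"       # /api/v2/users
```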

4. Upstream Blocks and Load Balancing

Upstream blocks define groups of backend servers. Nginx distributes requests across them using configurable algorithms.

# Round-robin (default) - equal distribution
upstream backend {
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

# Weighted round-robin - send more traffic to stronger servers
upstream backend {
    server 10.0.0.1:3000 weight=5;   # gets 5x more requests
    server 10.0.0.2:3000 weight=3;
    server 10.0.0.3:3000 weight=1;   # least traffic
}

# Least connections - routes to server with fewest active connections
upstream backend {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

# IP hash - same client always goes to the same server (sticky sessions)
upstream backend {
    ip_hash;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

# Server options
upstream backend {
    server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 backup;     # only used if all others are down
    server 10.0.0.4:3000 down;       # temporarily disabled
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

max_fails=3 fail_timeout=30s means: if a server fails 3 times within 30 seconds, mark it as unavailable for 30 seconds before trying again. This provides basic passive health checking.
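To build intuition for weights: over any window of nine requests (the weight sum 5+3+1), the servers receive five, three, and one respectively. A toy simulation of that share (nginx's real smooth weighted round-robin interleaves the picks rather than grouping them):

```shell
#!/bin/sh
# Split 18 requests across weight=5/3/1 slots and count per server.
for i in $(seq 0 17); do
  slot=$(( i % 9 ))                            # 9 slots per cycle (5 + 3 + 1)
  if   [ "$slot" -lt 5 ]; then echo 10.0.0.1   # weight=5
  elif [ "$slot" -lt 8 ]; then echo 10.0.0.2   # weight=3
  else                         echo 10.0.0.3   # weight=1
  fi
done | sort | uniq -c
# 10.0.0.1 appears 10 times, 10.0.0.2 six times, 10.0.0.3 twice
```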

5. SSL/TLS Termination with Let's Encrypt

SSL termination at the reverse proxy means nginx handles all TLS encryption and decryption. Your backend servers receive plain HTTP, simplifying their configuration and offloading CPU-intensive cryptographic operations.

# Step 1: Install Certbot and obtain certificate
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d app.example.com

# Step 2: Full SSL configuration
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    # Certificates from Let's Encrypt
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    # Modern TLS configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/app.example.com/chain.pem;

    # Session resumption for performance
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # HSTS - tell browsers to always use HTTPS
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Auto-renewal (certbot adds a systemd timer automatically)
# Verify: sudo systemctl list-timers | grep certbot
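After certbot is set up, it is worth knowing how to check a certificate's expiry yourself. The sketch below generates a throwaway self-signed cert so the inspection commands are runnable anywhere; the paths and CN are placeholders, and the commented-out lines show the equivalent checks against a real deployment:

```shell
#!/bin/sh
# Generate a disposable self-signed cert, then read its subject and expiry.
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=app.example.com" 2>/dev/null

openssl x509 -in /tmp/demo-cert.pem -noout -subject -enddate

# The same inspection against the live Let's Encrypt cert:
#   openssl x509 -in /etc/letsencrypt/live/app.example.com/fullchain.pem \
#     -noout -enddate
# Or end-to-end over the network:
#   echo | openssl s_client -connect app.example.com:443 \
#     -servername app.example.com 2>/dev/null | openssl x509 -noout -enddate
```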

6. WebSocket Proxying

WebSocket connections start as an HTTP upgrade request. Nginx does not forward upgrade headers by default, so you must configure them explicitly.

# Map to handle both regular HTTP and WebSocket connections
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    # ... SSL config ...

    # WebSocket endpoint
    location /ws/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # WebSocket connections are long-lived
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }

    # Regular HTTP traffic
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The map block sets $connection_upgrade to "upgrade" when the client sends an Upgrade header and to "close" for normal requests. The proxy_read_timeout 86400s (24 hours) prevents nginx from closing idle WebSocket connections.
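For a sense of what the forwarded handshake carries: the backend answers the Upgrade request with a Sec-WebSocket-Accept header computed from the client's key per RFC 6455. If the Upgrade headers above are missing, this handshake never happens and clients see a plain 200 or an error instead of a 101 Switching Protocols. The derivation, using the RFC's own example key:

```shell
#!/bin/sh
# RFC 6455: accept = base64( SHA1( client key + fixed GUID ) )
key="dGhlIHNhbXBsZSBub25jZQ=="                  # example key from RFC 6455
guid="258EAFA5-E914-47DA-95CA-C5AB0DC85B11"     # fixed, protocol-defined
printf '%s' "${key}${guid}" | openssl dgst -sha1 -binary | base64
# prints s3pPLMBiTxaQ9kYGzzhZRbK+xOo= (the value in the 101 response)
```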

7. Caching

Nginx can cache backend responses and serve them directly for subsequent requests, dramatically reducing backend load and response times.

# Define cache zone in http block (/etc/nginx/nginx.conf)
http {
    proxy_cache_path /var/cache/nginx levels=1:2
                     keys_zone=app_cache:10m max_size=1g
                     inactive=60m use_temp_path=off;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    location /api/ {
        proxy_pass http://backend;
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;       # cache 200 responses for 10 min
        proxy_cache_valid 404 1m;        # cache 404 for 1 min
        proxy_cache_use_stale error timeout updating http_500 http_502;
        proxy_cache_lock on;             # prevent cache stampede
        proxy_cache_methods GET HEAD;    # never cache POST/PUT/DELETE

        add_header X-Cache-Status $upstream_cache_status;
        # HIT = from cache, MISS = from backend, STALE = stale content
    }

    # Bypass cache for authenticated requests
    location /api/user/ {
        proxy_pass http://backend;
        proxy_cache app_cache;
        proxy_cache_bypass $http_authorization;
        proxy_no_cache $http_authorization;
    }
}
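Nginx open source has no purge API, but you can evict a single entry by deleting its file. With levels=1:2 the file path is derived from the MD5 of the cache key; a sketch, assuming the default key format of $scheme$proxy_host$request_uri (the key value below is illustrative):

```shell
#!/bin/sh
# Locate a cached response on disk under proxy_cache_path ... levels=1:2
key='https://backend/api/users'                 # illustrative cache key
h=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)
l1=$(printf '%s' "$h" | cut -c 32)              # last hex char -> level 1 dir
l2=$(printf '%s' "$h" | cut -c 30-31)           # previous two  -> level 2 dir
echo "/var/cache/nginx/${l1}/${l2}/${h}"
# Deleting that file evicts the entry; nginx re-fetches on the next request.
```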

8. Rate Limiting

Rate limiting protects your backend from abuse, brute-force attacks, and traffic spikes. Nginx uses the leaky bucket algorithm to smooth out bursts.

# Define rate limit zones in http block
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    # General API rate limit with burst allowance
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://backend;
    }

    # Strict rate limit for authentication endpoints
    location /api/auth/ {
        limit_req zone=login_limit burst=5 nodelay;
        limit_req_status 429;
        proxy_pass http://backend;
    }

    # Connection limit for downloads
    location /downloads/ {
        limit_conn conn_limit 5;
        limit_conn_status 429;
        proxy_pass http://backend;
    }
}

The burst parameter allows temporary spikes above the rate limit. nodelay processes burst requests immediately rather than queuing them. Use stricter limits on authentication endpoints to prevent brute-force attacks.
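The arithmetic behind burst is easy to verify: with burst=20 and nodelay, an instantaneous spike admits the first request plus 20 excess requests and rejects the rest with 429. A simplified simulation of the excess counter (nginx's real accounting also drains excess over time at the configured rate):

```shell
#!/bin/sh
# 25 requests arrive in the same instant against burst=20.
burst=20; allowed=0; rejected=0; excess=0
for i in $(seq 1 25); do
  if [ "$excess" -gt "$burst" ]; then
    rejected=$((rejected + 1))          # would get limit_req_status 429
  else
    allowed=$((allowed + 1))
    excess=$((excess + 1))
  fi
done
echo "allowed=$allowed rejected=$rejected"
# prints allowed=21 rejected=4
```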

9. Health Checks

Nginx open source provides passive health checks through max_fails and fail_timeout. The server is marked down only after real requests fail.

# Passive health checks (nginx open source)
upstream backend {
    server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 backup;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout http_500 http_502 http_503;
        proxy_next_upstream_tries 2;     # try max 2 servers before failing
        proxy_next_upstream_timeout 10s; # spend max 10s on retries
        proxy_connect_timeout 5s;
        proxy_read_timeout 30s;
    }
}

# DIY active health check with a cron job or external tool
# The script below only logs failures; extend it to rewrite the
# upstream config and run "nginx -s reload" for real failover

#!/bin/bash
# /usr/local/bin/healthcheck.sh
for server in 10.0.0.1 10.0.0.2 10.0.0.3; do
    if ! curl -sf --max-time 3 "http://${server}:3000/health" > /dev/null; then
        echo "[$(date)] $server is DOWN" >> /var/log/nginx/healthcheck.log
    fi
done


proxy_next_upstream tells nginx to automatically retry the request on another server when the current one returns an error, times out, or returns a 500-series response. This gives your users transparent failover.

10. Proxy Headers

When nginx proxies a request, the backend sees nginx's IP address instead of the client's. Proxy headers preserve the original client information.

location / {
    proxy_pass http://backend;

    # Essential headers
    proxy_set_header Host $host;                              # original domain
    proxy_set_header X-Real-IP $remote_addr;                  # client IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # proxy chain
    proxy_set_header X-Forwarded-Proto $scheme;               # http or https
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;

    # Timeouts
    proxy_connect_timeout 10s;    # establish connection to backend
    proxy_send_timeout 30s;       # send request body to backend
    proxy_read_timeout 30s;       # read response from backend

    # Buffers
    proxy_buffering on;
    proxy_buffer_size 4k;         # response headers
    proxy_buffers 8 16k;          # response body (8 x 16k)
    proxy_busy_buffers_size 32k;
}

Your backend application should trust these headers to identify clients. In Express.js, set app.set('trust proxy', true). In Django, configure SECURE_PROXY_SSL_HEADER and USE_X_FORWARDED_HOST. Without trusting the proxy, your app will see nginx's IP for every request.
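A quick way to see how X-Forwarded-For accumulates: each proxy appends the address it received the request from, so the leftmost entry is the original client's claim, but only the entries appended by proxies you control are trustworthy (a client can send a fake leftmost value). A shell sketch of both readings:

```shell
#!/bin/sh
xff="203.0.113.7, 10.0.0.5, 10.0.0.9"   # client, then two proxy hops

# Leftmost entry: the claimed client (spoofable if the first hop is untrusted)
echo "claimed client: ${xff%%,*}"
# claimed client: 203.0.113.7

# Rightmost entry: the peer your last trusted proxy actually talked to
echo "last hop: ${xff##*, }"
# last hop: 10.0.0.9
```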

11. Security Hardening

# Hide nginx version from responses and error pages
server_tokens off;
# Before: Server: nginx/1.26.2
# After:  Server: nginx

# Security headers
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "0" always;   # the XSS auditor was removed from modern browsers; 0 avoids its legacy quirks
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self'" always;

# Deny access to hidden files and common attack paths
location ~ /\. { deny all; return 404; }
location ~* /(wp-admin|wp-login|xmlrpc\.php|phpmyadmin) { deny all; return 404; }

# IP-based access control for admin endpoints
location /admin/ {
    allow 10.0.0.0/8;
    allow 192.168.1.0/24;
    deny all;
    proxy_pass http://backend;
}

# CORS headers for API endpoints
location /api/ {
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin "https://app.example.com";
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE";
        add_header Access-Control-Allow-Headers "Authorization, Content-Type";
        return 204;
    }
    add_header Access-Control-Allow-Origin "https://app.example.com" always;
    proxy_pass http://backend;
}

12. Multiple Virtual Hosts / Applications

Nginx can proxy multiple applications from a single server using separate server blocks per domain or path-based routing on one domain.

# Path-based routing - multiple apps on one domain
server {
    listen 443 ssl http2;
    server_name example.com;

    # Frontend SPA
    location / {
        proxy_pass http://127.0.0.1:3000;
    }

    # API backend
    location /api/ {
        proxy_pass http://127.0.0.1:4000;
    }

    # Admin panel - IP restricted
    location /admin/ {
        allow 10.0.0.0/8;
        deny all;
        proxy_pass http://127.0.0.1:5000;
    }

    # Static files served directly (no proxying)
    location /static/ {
        alias /var/www/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}

# Subdomain-based routing - separate server blocks per domain
# /etc/nginx/conf.d/api.conf
server {
    listen 443 ssl http2;
    server_name api.example.com;
    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

13. HTTP/2 and Gzip Configuration

# On nginx 1.25.1+ enable HTTP/2 with the standalone directive:
#   listen 443 ssl;
#   http2 on;
# Older versions add 'http2' to the listen directive: listen 443 ssl http2;
# No other changes needed - nginx handles protocol negotiation

# Gzip compression in http block
http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;          # compress proxied responses too
    gzip_comp_level 5;         # 1-9, 5 is good balance of speed vs size
    gzip_min_length 256;       # don't compress tiny responses
    gzip_types
        text/plain
        text/css
        text/javascript
        application/javascript
        application/json
        application/xml
        application/xml+rss
        image/svg+xml
        text/xml;

    # Disable gzip for old browsers
    gzip_disable "msie6";
}

# For pre-compressed files (build step generates .gz files)
location /static/ {
    gzip_static on;    # serve .gz files if they exist
    alias /var/www/static/;
    expires 1y;
    add_header Cache-Control "public, immutable";
}

gzip_proxied any matters whenever another proxy or CDN sits in front of nginx. Nginx treats a request as proxied when it carries a Via header, and with the default setting (off) it will not compress responses to such requests at all; any compresses them regardless of the request's cache-related headers.
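To confirm compression is actually reaching clients, request a page with curl -sI -H 'Accept-Encoding: gzip' https://app.example.com/ and look for Content-Encoding: gzip in the response. For a local feel of what gzip buys on text content:

```shell
#!/bin/sh
# Compress 6000 bytes of repetitive text and compare sizes.
printf 'hello world %.0s' $(seq 1 500) > /tmp/body.txt
wc -c < /tmp/body.txt                    # original: 6000 bytes
gzip -5 -c /tmp/body.txt | wc -c         # compressed size at gzip_comp_level 5
```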

14. Common Troubleshooting

502 Bad Gateway

Nginx received the request but could not get a valid response from the backend.

# Check if backend is running
systemctl status myapp
docker ps | grep myapp
curl -v http://127.0.0.1:3000/health

# Check nginx error log for the specific error
sudo tail -50 /var/log/nginx/error.log
# "connect() failed (111: Connection refused)" = backend is not running
# "no live upstreams" = all upstream servers are down
# "upstream prematurely closed connection" = backend crashed mid-request

# Common fixes:
# 1. Backend not listening on the right address
#    App listens on 127.0.0.1:3000 but proxy_pass uses container IP
# 2. Firewall blocking the connection (relevant when the backend is on another host)
sudo ufw allow from 10.0.0.0/24 to any port 3000
# 3. SELinux blocking nginx from making network connections
sudo setsebool -P httpd_can_network_connect 1

504 Gateway Timeout

# Backend is too slow - increase timeouts
location /api/reports/ {
    proxy_pass http://backend;
    proxy_connect_timeout 10s;
    proxy_send_timeout 120s;
    proxy_read_timeout 120s;   # increase for slow endpoints
}

413 Request Entity Too Large

# Increase client body size limit (default is 1MB)
client_max_body_size 50m;    # in server or location block

Buffer Errors

# "upstream sent too big header" errors
proxy_buffer_size 8k;            # increase header buffer (default 4k)
proxy_buffers 16 16k;            # more and larger body buffers
proxy_busy_buffers_size 32k;

# For very large responses (file downloads, exports)
proxy_max_temp_file_size 1024m;  # allow large temp files
# Or disable buffering entirely for streaming
proxy_buffering off;

Debugging Tips

# Test config before reloading
sudo nginx -t

# Reload without downtime (graceful)
sudo nginx -s reload

# Enable debug logging temporarily (very verbose - disable after)
error_log /var/log/nginx/error.log debug;

# Check loaded config and listening ports
nginx -T | head -50
sudo ss -tlnp | grep nginx

15. Production Best Practices

# /etc/nginx/nginx.conf - production-ready base configuration
user nginx;
worker_processes auto;           # one worker per CPU core
worker_rlimit_nofile 65535;      # max open files per worker
pid /run/nginx.pid;

events {
    worker_connections 4096;     # max connections per worker
    multi_accept on;             # accept all new connections at once
    use epoll;                   # efficient event method on Linux
}

http {
    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
    client_max_body_size 16m;

    # Logging
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$upstream_addr $upstream_response_time';
    access_log /var/log/nginx/access.log main buffer=16k flush=5s;
    error_log /var/log/nginx/error.log warn;

    # Gzip
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_types text/plain text/css application/json application/javascript
               text/xml application/xml image/svg+xml;

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

    # Upstream keepalive - reuse connections to backend
    upstream backend {
        server 127.0.0.1:3000;
        keepalive 32;            # keep 32 idle connections open
    }

    # Include per-site configs
    include /etc/nginx/conf.d/*.conf;
}

Key settings: worker_processes auto uses all CPU cores. keepalive 32 in the upstream keeps idle TCP connections to the backend open, cutting connection-setup latency; note that it only takes effect if the proxied location also sets proxy_http_version 1.1 and clears the Connection header with proxy_set_header Connection "". $upstream_response_time in the log format tracks backend latency for identifying slow endpoints. Buffered access logs reduce disk I/O.

Frequently Asked Questions

What is the difference between a forward proxy and a reverse proxy?

A forward proxy sits in front of clients and forwards their requests to the internet, hiding the client's identity from the destination server. Corporate proxies and VPNs are forward proxies. A reverse proxy sits in front of backend servers and forwards client requests to the appropriate server, hiding your infrastructure from clients. Nginx is primarily used as a reverse proxy: clients connect to nginx, and nginx forwards requests to your application servers. Reverse proxies provide load balancing, SSL termination, caching, and security benefits that forward proxies do not.

How do I fix a 502 Bad Gateway error in nginx?

A 502 Bad Gateway means nginx received the client request but got an invalid response (or no response) from the upstream backend. Check these in order: (1) is the backend running? Use systemctl status or docker ps; (2) is proxy_pass pointing to the correct host and port? (3) is the backend listening on the right address — 127.0.0.1 vs 0.0.0.0 matters when nginx and the app are in different containers; (4) is a firewall or SELinux blocking the connection? Check /var/log/nginx/error.log for the specific error — "Connection refused" means the backend is not running, "upstream prematurely closed connection" means it crashed mid-request.

Should I terminate SSL at nginx or at the application?

Terminate SSL at nginx in almost all cases. It simplifies certificate management to one location, offloads CPU-intensive TLS operations from application servers, allows nginx to inspect and cache HTTP content, and enables HTTP/2 without modifying your app. The traffic between nginx and your backend travels over plain HTTP on your private network. If regulatory compliance requires end-to-end encryption, configure nginx to re-encrypt traffic to the backend using proxy_pass https:// with proxy_ssl_verify on and proxy_ssl_trusted_certificate.

How does nginx load balancing work with sticky sessions?

The ip_hash directive routes all requests from the same client IP to the same backend server. This is essential when your application stores session state in server memory. Add ip_hash; inside your upstream block. The limitation is that users behind corporate NAT will all go to one server. The better long-term solution is to make your application stateless by storing sessions in Redis or a database, which allows standard round-robin load balancing and easier horizontal scaling.

How do I proxy WebSocket connections through nginx?

WebSocket requires HTTP Upgrade headers that nginx does not forward by default. Add these three directives to your location block: proxy_http_version 1.1 (WebSocket needs HTTP/1.1 for the upgrade handshake), proxy_set_header Upgrade $http_upgrade (forwards the upgrade request), and proxy_set_header Connection "upgrade" (tells nginx to switch protocols). Also set proxy_read_timeout 86400s because WebSocket connections are long-lived and nginx's default 60-second timeout will close idle connections. Use a map block to conditionally set the Connection header so normal HTTP requests are not affected.

Continue Learning

Related Resources

Nginx Configuration Guide
Deep dive into nginx directives and configuration
Docker Compose Guide
Orchestrate nginx with backend services
Docker Security Guide
Harden your containerized applications
SSH Config Guide
Secure remote access and SSH tunneling