Nginx Load Balancing Complete Guide: Upstream, Health Checks & SSL
Load balancing is the practice of distributing incoming network traffic across multiple backend servers to ensure no single server bears too much load. As your application scales beyond what a single server can handle, load balancing becomes essential for maintaining performance, availability, and reliability. Nginx is one of the most popular choices for this role, combining a high-performance HTTP server with powerful reverse proxy and load balancing capabilities in a single lightweight package.
This guide covers everything you need to deploy Nginx as a production load balancer: upstream configuration, algorithms, health checks, SSL termination, sticky sessions, WebSocket support, Docker/Kubernetes integration, high availability, and performance tuning.
1. Why Load Balancing Matters
Without load balancing, a single application server handles all requests. When traffic spikes or the server fails, your application goes down. Load balancing solves three problems: scalability (distribute requests across servers to handle more users), high availability (if one backend crashes, others serve traffic with zero downtime), and performance (servers share the workload, reducing response times and preventing bottlenecks).
2. Nginx as a Load Balancer vs HAProxy
Both Nginx and HAProxy are excellent load balancers, but they serve different niches:
Nginx is a web server first and load balancer second. It excels when you need to serve static files, terminate SSL, cache responses, and balance load in one process. HAProxy is a dedicated load balancer with more granular health checking, connection draining, real-time statistics, and ACL-based routing out of the box.
Choose Nginx when you already use it as a web server, need SSL termination plus load balancing, or want a single tool for static files and proxying. Choose HAProxy when you need advanced health checking, Layer 4 TCP proxying, or detailed built-in statistics at extremely high connection counts.
3. The Upstream Directive: Defining Backend Servers
The upstream block defines a group of backend servers that Nginx distributes requests across. Place it inside the http context, outside any server block:
http {
upstream backend_app {
server 10.0.1.10:8080;
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://backend_app;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
Each server line inside the upstream block specifies a backend address and port. Hostnames are resolved once, when the configuration is loaded (at startup or on reload). You can also use Unix sockets:
upstream backend_app {
server unix:/var/run/app1.sock;
server unix:/var/run/app2.sock;
server 10.0.1.10:8080 backup; # only used if socket servers are down
}
The backup parameter marks a server as a standby that only receives traffic when all primary servers are unavailable. The down parameter marks a server as permanently unavailable — useful during maintenance without removing it from the configuration.
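A minimal sketch combining both parameters (the addresses are placeholders):
upstream backend_app {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080 down;   # under maintenance, never selected
    server 10.0.1.12:8080 backup; # standby, used only when all primaries are unavailable
}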
4. Load Balancing Algorithms
Round Robin (Default)
Requests are distributed sequentially across servers. Server 1 gets the first request, server 2 gets the second, and so on. This is the default when no algorithm is specified and works well when all backends have similar capacity.
upstream backend {
# Round robin is the default - no directive needed
server 10.0.1.10:8080;
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
Least Connections
Routes each new request to the server with the fewest active connections. This is the best choice when request processing times vary significantly, such as APIs where some endpoints are fast and others involve heavy computation or database queries.
upstream backend {
least_conn;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
IP Hash
Uses the client IP address to determine which server receives the request. The same client always reaches the same backend, providing a form of session persistence. Useful when your application stores session state on individual servers rather than in a shared store.
upstream backend {
ip_hash;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
Caveat: If many users share a public IP (corporate NAT, mobile carrier), traffic concentrates on one backend.
Generic Hash
Lets you hash any variable — a request URI, a cookie, a header, or a combination. Adding the consistent parameter uses ketama consistent hashing, which minimizes redistribution when servers are added or removed.
upstream backend {
hash $request_uri consistent;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
# Hash by session cookie for cookie-based sticky sessions
upstream backend_session {
hash $cookie_SESSIONID consistent;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
}
Random
Selects a random server for each request. With the two parameter, Nginx picks two servers at random and chooses the one with fewer active connections (power of two choices). This provides a good balance of simplicity and fairness.
upstream backend {
random two least_conn;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
server 10.0.1.12:8080;
server 10.0.1.13:8080;
}
5. Weighted Load Balancing
When backend servers have different capacities, assign weights so more powerful servers receive proportionally more traffic. The default weight is 1. A server with weight 3 receives three times as many requests as a server with weight 1.
upstream backend {
server 10.0.1.10:8080 weight=5; # 8 CPU cores, 32 GB RAM
server 10.0.1.11:8080 weight=3; # 4 CPU cores, 16 GB RAM
server 10.0.1.12:8080 weight=1; # 2 CPU cores, 8 GB RAM
}
Weights work with all algorithms. Combined with least_conn, Nginx considers both the weight and the current connection count, producing a weighted least-connections strategy that is ideal for heterogeneous server pools:
upstream backend {
least_conn;
server 10.0.1.10:8080 weight=5;
server 10.0.1.11:8080 weight=3;
server 10.0.1.12:8080 weight=1;
}
6. Health Checks: Passive and Active
Passive Health Checks (Open Source)
Nginx monitors responses from backends and temporarily marks a server as unavailable when failures exceed a threshold. This is automatic in open-source Nginx:
upstream backend {
server 10.0.1.10:8080 max_fails=3 fail_timeout=30s;
server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
server 10.0.1.12:8080 max_fails=3 fail_timeout=30s;
}
max_fails=3 means three failed attempts within the fail_timeout window mark the server as unavailable. fail_timeout=30s defines both the window in which failures are counted and how long the server then stays out of rotation. What counts as a failed attempt is controlled by proxy_next_upstream:
location / {
proxy_pass http://backend;
proxy_next_upstream error timeout http_500 http_502 http_503;
proxy_next_upstream_timeout 10s;
proxy_next_upstream_tries 3;
}
This tells Nginx to retry on connection errors, timeouts, or 5xx responses, with a maximum of 3 attempts within 10 seconds.
Active Health Checks (Nginx Plus)
Nginx Plus supports active probes that periodically send requests to backends regardless of traffic. For open-source Nginx, you can achieve similar results with an external health checker:
# Nginx Plus active health check syntax
upstream backend {
zone backend_zone 64k;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
}
server {
location / {
proxy_pass http://backend;
health_check interval=5s fails=3 passes=2 uri=/health;
}
}
For open-source Nginx, use an external health checker, for example a cron script, Consul with a templating agent, or a custom daemon, that updates the Nginx configuration and triggers a reload when backend health changes.
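One common pattern, sketched below, isolates the upstream definition in an included file that the external checker rewrites before reloading Nginx; the path /etc/nginx/conf.d/upstreams.conf is a placeholder.
# /etc/nginx/conf.d/upstreams.conf - rewritten by the external health checker
upstream backend {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080 down; # the checker marks unhealthy backends as down
}

# Main nginx.conf pulls the generated file in and proxies as usual
http {
    include /etc/nginx/conf.d/upstreams.conf;
    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
# After rewriting the file, the checker validates and reloads:
# nginx -t && nginx -s reload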
7. SSL Termination and Pass-Through
SSL Termination
The load balancer handles TLS encryption/decryption and forwards plain HTTP to backends. This is the most common approach because it offloads CPU-intensive cryptography from application servers:
upstream backend {
server 10.0.1.10:8080;
server 10.0.1.11:8080;
}
server {
listen 443 ssl http2;
server_name app.example.com;
ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
}
}
# Redirect HTTP to HTTPS
server {
listen 80;
server_name app.example.com;
return 301 https://$host$request_uri;
}
SSL Pass-Through
The load balancer forwards encrypted traffic directly to backends without decrypting it. Use the stream module for Layer 4 TCP proxying. This is necessary when backends must handle their own TLS, such as mutual TLS (mTLS) between services:
stream {
upstream backend_tls {
server 10.0.1.10:443;
server 10.0.1.11:443;
}
server {
listen 443;
proxy_pass backend_tls;
proxy_connect_timeout 5s;
}
}
With SSL pass-through, Nginx cannot inspect HTTP headers, modify requests, or route based on URL paths. It operates purely at the TCP level.
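If you need hostname-aware routing without terminating TLS, the stream module can read the SNI field from the client handshake via ssl_preread (requires the ngx_stream_ssl_preread_module). A sketch with illustrative upstream names:
stream {
    # Route by the server name sent in the TLS ClientHello (SNI)
    map $ssl_preread_server_name $tls_upstream {
        app.example.com backend_app_tls;
        api.example.com backend_api_tls;
        default         backend_app_tls;
    }
    upstream backend_app_tls {
        server 10.0.1.10:443;
        server 10.0.1.11:443;
    }
    upstream backend_api_tls {
        server 10.0.2.10:443;
    }
    server {
        listen 443;
        ssl_preread on; # read the SNI without decrypting the connection
        proxy_pass $tls_upstream;
    }
}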
8. Sticky Sessions
Sticky sessions ensure a client consistently reaches the same backend, important when session data is stored locally rather than in a shared store like Redis.
IP Hash (Simplest)
upstream backend {
ip_hash;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
}
Cookie-Based Hash
upstream backend {
hash $cookie_JSESSIONID consistent;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
}
# Or use a custom route cookie set by the application
map $cookie_route $backend_route {
default backend;
server1 backend_s1;
server2 backend_s2;
}
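To complete the route-cookie approach, the mapped value has to be used as the proxy_pass target. A sketch, assuming backend_s1 and backend_s2 are single-server groups pinned to each backend:
upstream backend_s1 { server 10.0.1.10:8080; }
upstream backend_s2 { server 10.0.1.11:8080; }
server {
    listen 80;
    location / {
        # Requests without a route cookie fall back to the load-balanced group
        proxy_pass http://$backend_route;
        proxy_set_header Host $host;
    }
}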
Nginx Plus Sticky Cookie
# Nginx Plus only
upstream backend {
server 10.0.1.10:8080;
server 10.0.1.11:8080;
sticky cookie srv_id expires=1h domain=.example.com path=/;
}
Best practice: Avoid sticky sessions when possible. Store session data in Redis or a database so any backend can serve any request, enabling true horizontal scaling.
9. Rate Limiting with Load Balancing
Apply rate limiting at the load balancer to protect all backends uniformly. The limit_req module uses a shared memory zone to track request rates across all worker processes:
http {
# Define rate limit zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;
upstream api_backend {
least_conn;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
}
server {
listen 443 ssl http2;
server_name api.example.com;
# General API rate limit: 10 requests/sec with burst of 20
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
limit_req_status 429;
proxy_pass http://api_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# Strict login rate limit: 1 request/sec with burst of 5
location /api/auth/login {
limit_req zone=login_limit burst=5 nodelay;
limit_req_status 429;
proxy_pass http://api_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
}
The nodelay parameter processes burst requests immediately rather than queuing them. Use limit_req_status 429 to return a proper "Too Many Requests" status instead of the default 503.
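If you would rather smooth bursts than pass them all through at once, Nginx 1.15.7 and later support two-stage rate limiting with the delay parameter; a sketch reusing the api_limit zone defined above:
location /api/ {
    # The first 10 requests of a burst are served immediately; requests 11-20
    # are delayed to hold the 10 r/s rate; anything beyond burst=20 is rejected.
    limit_req zone=api_limit burst=20 delay=10;
    limit_req_status 429;
    proxy_pass http://api_backend;
}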
10. WebSocket Proxying with Load Balancing
WebSocket connections require HTTP/1.1 Upgrade headers to be passed through. Combine this with sticky sessions since WebSockets are long-lived connections that must persist on a single backend:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream websocket_backend {
ip_hash; # sticky sessions for WebSockets
server 10.0.1.10:3000;
server 10.0.1.11:3000;
}
server {
listen 443 ssl http2;
server_name ws.example.com;
ssl_certificate /etc/letsencrypt/live/ws.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/ws.example.com/privkey.pem;
location /ws/ {
proxy_pass http://websocket_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 3600s; # 1 hour idle timeout
proxy_send_timeout 3600s;
}
# Regular HTTP traffic shares the same ip_hash upstream
location / {
proxy_pass http://websocket_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
The proxy_read_timeout is critical: by default, Nginx closes a proxied connection after 60 seconds without data. For WebSockets, set it to an hour or more. See our WebSocket Complete Guide for more on WebSocket architecture.
11. Docker and Kubernetes Integration
Docker Compose with Nginx Load Balancer
Use Docker Compose to run Nginx in front of multiple application containers. The deploy.replicas setting creates multiple instances that Nginx balances across:
# docker-compose.yml
services:
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./certs:/etc/nginx/certs:ro
depends_on:
- app
app:
image: myapp:latest
deploy:
replicas: 3
expose:
- "8080"
# nginx.conf for Docker
upstream app_servers {
# Docker DNS resolves the service name to all container IPs
server app:8080;
}
server {
listen 80;
resolver 127.0.0.11 valid=10s; # Docker internal DNS
location / {
proxy_pass http://app_servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Important: Docker DNS returns all container IPs for a service name, but open-source Nginx resolves the names in an upstream block only when the configuration is loaded. To pick up container changes at runtime, use the resolver directive together with a variable in proxy_pass (set $upstream app:8080; proxy_pass http://$upstream;); the variable forces Nginx to re-resolve the name, honoring the valid=10s TTL.
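A minimal sketch of that variable-based approach, using Docker's embedded DNS at 127.0.0.11:
server {
    listen 80;
    resolver 127.0.0.11 valid=10s; # Docker's internal DNS, re-resolved every 10 seconds
    set $upstream_app app:8080;    # service name resolved at request time
    location / {
        # A variable in proxy_pass forces runtime DNS resolution
        proxy_pass http://$upstream_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}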
Kubernetes Ingress
In Kubernetes, the nginx-ingress-controller acts as the load balancer. It watches Ingress resources and configures Nginx automatically:
# Kubernetes Ingress resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
tls:
- hosts:
- app.example.com
secretName: app-tls
rules:
- host: app.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: app-service
port:
number: 80
The ingress controller handles backend discovery through Kubernetes Services. Pods can scale up and down, and the controller updates the Nginx configuration automatically.
12. Monitoring and Logging Load Balanced Traffic
Effective load balancer monitoring requires tracking which backend served each request and how long it took:
http {
# Custom log format with upstream details
log_format lb_access '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'upstream=$upstream_addr '
'upstream_status=$upstream_status '
'upstream_time=$upstream_response_time '
'request_time=$request_time';
server {
access_log /var/log/nginx/lb-access.log lb_access;
location / {
proxy_pass http://backend;
}
}
}
Key variables for load balancer monitoring:
- $upstream_addr — the IP and port of the backend that handled the request
- $upstream_status — the HTTP status code returned by the backend
- $upstream_response_time — the time in seconds (with millisecond resolution) the backend took to respond
- $upstream_connect_time — the time to establish the connection to the backend
- $request_time — the total time from when Nginx received the first client byte to when it sent the last byte back
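If your log pipeline prefers structured output, the same variables can be emitted as JSON using escape=json (available since Nginx 1.11.8); the field names below are arbitrary:
log_format lb_json escape=json
    '{"time":"$time_iso8601","remote_addr":"$remote_addr",'
    '"request":"$request","status":"$status",'
    '"upstream_addr":"$upstream_addr","upstream_status":"$upstream_status",'
    '"upstream_response_time":"$upstream_response_time",'
    '"request_time":"$request_time"}';

access_log /var/log/nginx/lb-access.json lb_json;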
Stub Status for Basic Metrics
server {
listen 8080;
allow 10.0.0.0/8;
deny all;
location /nginx_status {
stub_status on;
}
}
This exposes active connections, accepts, handled connections, and total requests. Pair it with Prometheus nginx-exporter or a Telegraf plugin to feed metrics into Grafana dashboards.
13. High Availability with Keepalived
A single Nginx load balancer is a single point of failure. Use keepalived to create an active-passive pair that shares a virtual IP (VIP). If the primary Nginx fails, the secondary takes over the VIP within seconds:
# /etc/keepalived/keepalived.conf
# PRIMARY: state MASTER, priority 100
# SECONDARY: state BACKUP, priority 90
vrrp_script check_nginx {
script "/usr/bin/curl -sf http://localhost/health"
interval 2
weight -20
fall 3
rise 2
}
vrrp_instance VI_1 {
state MASTER # use BACKUP on the secondary node
interface eth0
virtual_router_id 51
priority 100 # use 90 on the secondary node
advert_int 1
authentication {
auth_type PASS
auth_pass mySecretPass
}
virtual_ipaddress {
192.168.1.100/24
}
track_script {
check_nginx
}
}
The vrrp_script checks Nginx health every 2 seconds. If three consecutive checks fail, the priority drops by 20 (from 100 to 80), so the secondary wins the VRRP election and assumes the VIP. Once the primary recovers (two consecutive passing checks), it reclaims the VIP. DNS points to the virtual IP, so clients always connect to the active instance.
14. Performance Tuning: Connections, Keepalive, and Buffers
worker_processes auto; # one worker per CPU core
worker_rlimit_nofile 65535; # max open files per worker
events {
worker_connections 4096; # max connections per worker
multi_accept on; # accept multiple connections at once
use epoll; # Linux: most efficient event model
}
http {
# TCP optimization
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# Client keepalive
keepalive_timeout 65;
keepalive_requests 1000;
# Upstream keepalive - CRITICAL for load balancer performance
upstream backend {
least_conn;
server 10.0.1.10:8080;
server 10.0.1.11:8080;
server 10.0.1.12:8080;
keepalive 64; # persistent connection pool per worker
keepalive_timeout 60s;
keepalive_requests 1000;
}
server {
location / {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Connection ""; # required for upstream keepalive
}
}
# Buffer tuning for proxied responses
proxy_buffer_size 8k;
proxy_buffers 16 8k;
proxy_busy_buffers_size 16k;
# Timeouts
proxy_connect_timeout 5s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
}
Upstream keepalive is the single most impactful performance setting for a load balancer. Without it, Nginx opens a new TCP connection to the backend for every request, adding latency and consuming file descriptors. With keepalive 64, each worker process maintains a pool of up to 64 idle connections to the upstream, reusing them for subsequent requests. This alone can improve throughput by 20-40%.
Key requirements for upstream keepalive: Set proxy_http_version 1.1 (HTTP/1.0 does not support persistent connections) and proxy_set_header Connection "" (override the default "close" value). For buffer sizing, start with 8k and increase if backends send large headers. Monitor the error log for "upstream response is buffered to a temporary file" warnings.
15. Real-World Configuration Examples
E-Commerce Application
upstream web_app {
least_conn;
server 10.0.1.10:3000 weight=3 max_fails=3 fail_timeout=30s;
server 10.0.1.11:3000 weight=3 max_fails=3 fail_timeout=30s;
server 10.0.1.12:3000 weight=1 max_fails=3 fail_timeout=30s; # canary
keepalive 32;
}
upstream api_gateway {
least_conn;
server 10.0.2.10:8080 max_fails=2 fail_timeout=15s;
server 10.0.2.11:8080 max_fails=2 fail_timeout=15s;
keepalive 64;
}
upstream static_cdn {
server 10.0.3.10:80;
server 10.0.3.11:80;
keepalive 16;
}
server {
listen 443 ssl http2;
server_name shop.example.com;
ssl_certificate /etc/letsencrypt/live/shop.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/shop.example.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# Static assets - cached aggressively
location /static/ {
proxy_pass http://static_cdn;
proxy_cache_valid 200 1d;
expires 30d;
add_header Cache-Control "public, immutable";
}
# API routes
location /api/ {
proxy_pass http://api_gateway;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Web application
location / {
proxy_pass http://web_app;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Microservices API Gateway
upstream auth_service { server 10.0.1.10:4000; server 10.0.1.11:4000; keepalive 16; }
upstream user_service { server 10.0.2.10:4001; server 10.0.2.11:4001; keepalive 16; }
upstream order_service { server 10.0.3.10:4002; server 10.0.3.11:4002; keepalive 32; }
upstream product_service { server 10.0.4.10:4003; server 10.0.4.11:4003; keepalive 32; }
server {
listen 443 ssl http2;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
location /auth/ { proxy_pass http://auth_service; include proxy_params; }
location /users/ { proxy_pass http://user_service; include proxy_params; }
location /orders/ { proxy_pass http://order_service; include proxy_params; }
location /products/ { proxy_pass http://product_service; include proxy_params; }
location = /health { access_log off; return 200 "OK\n"; }
}
Blue-Green Deployment
# Toggle between blue and green by swapping the servers and reloading
upstream active_backend {
server 10.0.1.10:8080; # blue (swap to 10.0.2.x for green)
server 10.0.1.11:8080; # blue
keepalive 32;
}
server {
listen 443 ssl http2;
server_name app.example.com;
location / {
proxy_pass http://active_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
}
}
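An alternative that avoids editing server addresses at cut-over is to keep both pools defined and switch between them via a one-line include file rewritten by the deployment script; the path and variable name below are illustrative.
# Both pools stay defined; only the include file changes at cut-over
upstream backend_blue {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    keepalive 32;
}
upstream backend_green {
    server 10.0.2.10:8080;
    server 10.0.2.11:8080;
    keepalive 32;
}

# /etc/nginx/conf.d/active_pool.conf contains a single line, e.g.:
# set $active_pool backend_blue;

server {
    listen 443 ssl http2;
    server_name app.example.com;
    include /etc/nginx/conf.d/active_pool.conf;
    location / {
        proxy_pass http://$active_pool;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
    }
}
# Cut over by rewriting active_pool.conf, then: nginx -t && nginx -s reload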
Frequently Asked Questions
What is the difference between Nginx load balancing and HAProxy?
Nginx is a full web server that also functions as a load balancer, reverse proxy, and SSL terminator. HAProxy is a dedicated load balancer and proxy with more advanced health checking, connection draining, and real-time statistics out of the box. Nginx is better when you need to serve static content alongside load balancing or want a single tool for both web serving and proxying. HAProxy is often preferred for pure Layer 4/Layer 7 load balancing at very high connection counts, as it was purpose-built for that role. For most web applications, Nginx is sufficient and simpler to manage since it consolidates roles.
How do I configure sticky sessions in Nginx without Nginx Plus?
In open-source Nginx, use the ip_hash directive in your upstream block to route requests from the same client IP to the same backend server. Alternatively, use the hash directive with a session cookie or URL parameter, such as hash $cookie_sessionid consistent. For more control, have the application set a route cookie and use the map directive to select an upstream based on its value. The third-party nginx-sticky-module-ng provides cookie-based sticky sessions similar to Nginx Plus. Note that ip_hash can cause uneven distribution when many users share a public IP, such as behind corporate NATs.
Why am I getting 502 Bad Gateway errors with Nginx load balancing?
A 502 error means Nginx received an invalid response from an upstream server. Check that all backend servers in the upstream block are running and reachable. Verify the correct ports are specified. Increase proxy_connect_timeout, proxy_read_timeout, and proxy_send_timeout if backends are slow. Check the Nginx error log at /var/log/nginx/error.log for specific messages. Ensure SELinux is not blocking connections with setsebool -P httpd_can_network_connect 1 on RHEL-based systems. If the error log shows "upstream sent too big header", increase proxy_buffer_size.
Can Nginx load balance WebSocket connections?
Yes. Use the map directive to set the Connection header based on the Upgrade header, then configure proxy_http_version 1.1, proxy_set_header Upgrade $http_upgrade, and proxy_set_header Connection $connection_upgrade in your location block. For load balancing WebSockets across multiple backends, use ip_hash or hash $remote_addr consistent in the upstream block so that a client maintains a persistent connection to the same backend. Increase proxy_read_timeout and proxy_send_timeout to avoid Nginx closing idle WebSocket connections prematurely.
How many backend servers can Nginx handle in an upstream block?
Nginx can handle hundreds of backend servers in a single upstream block without performance issues. The limit is practical rather than technical: each backend adds minimal overhead. In production, clusters of 10-50 servers per upstream are common. For very large deployments with dynamic backends, use the resolver directive with DNS-based service discovery so Nginx re-resolves server addresses periodically. Nginx Plus adds an API for dynamically adding and removing servers without reload, which is useful for auto-scaling environments.
Conclusion
Nginx load balancing transforms a single-server setup into a scalable, fault-tolerant architecture. Start with a simple round-robin upstream, add passive health checks with max_fails and fail_timeout, then tune with upstream keepalive and the right algorithm for your workload. SSL termination at the load balancer simplifies certificate management and offloads cryptographic overhead from application servers.
For production, pair Nginx with keepalived for high availability, configure logging with upstream variables, and use rate limiting to protect backends. Whether you run bare-metal servers, Docker containers, or Kubernetes pods, Nginx adapts and scales with your traffic.
Learn More
- Nginx Configuration: Complete Guide — master server blocks, location directives, SSL/TLS, and security headers
- Nginx Reverse Proxy Guide — proxying fundamentals, headers, and caching
- Docker Compose: The Complete Guide — multi-container orchestration with Nginx load balancing
- Kubernetes Complete Guide — container orchestration with nginx-ingress-controller
- WebSocket Complete Guide — real-time communication architecture and scaling