Nginx Load Balancing

devops
Nginx
scaffolding
strict_senior

Configure Nginx as a load balancer with health checks, session persistence, and failover.

12/8/2025

Prompt

Nginx Load Balancer Configuration

Configure Nginx as a load balancer for [Application] to distribute traffic across multiple backend servers.

Requirements

1. Backend Servers

Set up load balancing for:

  • [Number] backend servers
  • [Server Type] (e.g., Node.js API servers)
  • Running on [Ports/IPs]

2. Load Balancing Method

Choose a strategy:

  • Round Robin - Distribute evenly (default)
  • Least Connections - Send to server with fewest active connections
  • IP Hash - Session persistence based on client IP
  • Weighted - Distribute based on server capacity

3. Health Monitoring

Implement:

  • Passive health checks (max_fails, fail_timeout)
  • Active health checks (if using Nginx Plus)
  • Backup servers for failover
  • Automatic server removal on failure

4. Session Persistence

Configure if needed:

  • Sticky sessions with cookies (see the sketch after this list)
  • IP hash for consistent routing
  • Shared session storage
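
A minimal sketch of cookie-based affinity on open-source Nginx, which hashes on an application session cookie (the cookie name sessionid and the backend addresses below are assumptions; the sticky cookie directive shown commented out is Nginx Plus only):

upstream app_backend {
    # Route requests carrying the same session cookie to the same server;
    # requests without the cookie all hash to a single server.
    hash $cookie_sessionid consistent;
    server 10.0.0.11:3000;
    server 10.0.0.12:3000;

    # Nginx Plus alternative: the load balancer issues its own affinity cookie
    # sticky cookie srv_id expires=1h path=/;
}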

5. Performance Optimization

Include:

  • Connection keep-alive
  • Proper timeouts
  • Buffer sizes
  • Upstream connection pooling (see the keepalive sketch after this list)
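
The upstream keepalive pool only takes effect when requests are proxied over HTTP/1.1 with the Connection header cleared; a minimal sketch with assumed values (pool size of 32, example backend addresses):

upstream app_backend {
    server 10.0.0.11:3000;
    server 10.0.0.12:3000;
    keepalive 32;  # Idle upstream connections kept open per worker process
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;          # Keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # Clear the default "close" so connections are reused
    }
}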

Implementation Pattern

# Define upstream backend servers
upstream [backend_name] {
    # Load balancing method (choose one)
    # round-robin (default - no directive needed)
    # least_conn;          # Least connections
    # ip_hash;             # IP-based routing
    # hash $request_uri;   # URI-based routing
    
    # Server definitions
    server [server1_ip]:[port] weight=[weight] max_fails=[num] fail_timeout=[time]s;
    server [server2_ip]:[port] weight=[weight] max_fails=[num] fail_timeout=[time]s;
    server [server3_ip]:[port] weight=[weight] max_fails=[num] fail_timeout=[time]s;
    server [backup_server]:[port] backup;  # Only used when others fail
    
    # Connection settings
    keepalive [connections];  # Keep-alive connections pool
}

# Server block
server {
    listen 80;
    server_name [domain.com];
    
    # Redirect HTTP to HTTPS (optional)
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name [domain.com];
    
    # SSL configuration
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    
    # Proxy settings
    location / {
        proxy_pass http://[backend_name];
        
        # Headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Timeouts
        proxy_connect_timeout [seconds];
        proxy_send_timeout [seconds];
        proxy_read_timeout [seconds];
        
        # Buffering
        proxy_buffering on;
        proxy_buffer_size [size];
        proxy_buffers [number] [size];
        
        # WebSocket support (if needed)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
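        # Note: hard-coding Connection "upgrade" sends it on every request and
        # prevents upstream keepalive reuse; for mixed HTTP/WebSocket traffic,
        # a map $http_upgrade $connection_upgrade block combined with
        # proxy_set_header Connection $connection_upgrade; is the usual fix.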
    }
    
    # Health check endpoint (optional)
    location /health {
        access_log off;
        default_type text/plain;
        return 200 "healthy";
    }
}
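
After filling in the placeholders, validate the configuration with nginx -t and apply it with nginx -s reload (or systemctl reload nginx); a reload picks up upstream changes gracefully without dropping active connections.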

Load Balancing Methods

Round Robin (Default)

upstream backend {
    server server1:[port];
    server server2:[port];
    server server3:[port];
}

Least Connections

upstream backend {
    least_conn;
    server server1:[port];
    server server2:[port];
}

IP Hash (Session Persistence)

upstream backend {
    ip_hash;
    server server1:[port];
    server server2:[port];
}

Weighted Distribution

upstream backend {
    server server1:[port] weight=3;  # 75% of traffic
    server server2:[port] weight=1;  # 25% of traffic
}

Health Checks

upstream backend {
    server server1:[port] max_fails=3 fail_timeout=30s;
    server server2:[port] max_fails=3 fail_timeout=30s;
    server server3:[port] backup;  # Backup server
}
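
The max_fails/fail_timeout pair is a passive check: a server is skipped for fail_timeout seconds once max_fails proxied requests to it have failed. Active health checks, where Nginx probes each backend on a schedule, require Nginx Plus; a minimal sketch assuming the backends expose a /health endpoint:

upstream backend {
    zone backend 64k;  # Shared memory zone required by health_check
    server server1:[port];
    server server2:[port];
}

server {
    location / {
        proxy_pass http://backend;
        # Nginx Plus only: probe every 5s, mark down after 3 failures, up after 2 passes
        health_check uri=/health interval=5s fails=3 passes=2;
    }
}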

Best Practices

  • Use least_conn for long-running requests
  • Implement health checks on all servers
  • Configure appropriate timeout values
  • Enable keep-alive connections
  • Use backup servers for high availability
  • Monitor upstream server status
  • Implement SSL termination at load balancer
  • Use proper buffer sizes for your workload
  • Log upstream response times (see the log format sketch below)
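
For the last point, the built-in variables $upstream_addr, $upstream_status, and $upstream_response_time can be added to a custom log format (the format name and log path below are assumptions):

http {
    log_format upstream_time '$remote_addr "$request" status=$status '
                             'upstream=$upstream_addr upstream_status=$upstream_status '
                             'request_time=$request_time '
                             'upstream_response_time=$upstream_response_time';

    access_log /var/log/nginx/access.log upstream_time;
}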

Tags

nginx
load-balancing
high-availability
devops

Tested Models

gpt-4
claude-3-opus
