Configs · Apr 13, 2026 · 3 min read

Nginx — High-Performance Web Server and Reverse Proxy

Nginx is the most popular web server in the world, powering over 30% of all websites. It excels as a reverse proxy, load balancer, HTTP cache, and TLS terminator — handling millions of concurrent connections with minimal memory usage.

Quick Use

Use it first, then decide how deep to go

The commands below cover the essentials: install Nginx, test the configuration, and reload it without downtime.

# Install Nginx
# macOS
brew install nginx

# Linux
sudo apt install nginx
sudo systemctl start nginx

# Docker
docker run -d -p 80:80 nginx:latest

# Test configuration
nginx -t

# Reload without downtime
sudo nginx -s reload

Introduction

Nginx ("engine-x") is the most widely deployed web server and reverse proxy on the internet. Created by Igor Sysoev in 2004 to solve the C10K problem (handling 10,000 concurrent connections), its event-driven, asynchronous architecture makes it dramatically more efficient than traditional thread-per-connection servers like Apache.

With over 30,000 GitHub stars and powering 34% of all websites (including Netflix, Airbnb, and WordPress.com), Nginx is the cornerstone of modern web infrastructure. It serves as a web server, reverse proxy, load balancer, TLS terminator, and HTTP cache — often all at once.

What Nginx Does

Nginx handles incoming HTTP/HTTPS traffic and routes it efficiently. As a reverse proxy, it sits in front of application servers (Node.js, Python, Go, etc.), handling TLS termination, load balancing, caching, and compression. Its event-driven architecture lets a single worker process handle thousands of connections simultaneously.
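A minimal reverse-proxy configuration makes this concrete. This is a sketch: the hostname and backend port are placeholders for your own app server.

```nginx
# Minimal reverse proxy: forward all port-80 traffic to a local app.
# "app.example.com" and port 3000 are illustrative placeholders.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;       # the upstream application
        proxy_set_header Host $host;            # preserve the original host
        proxy_set_header X-Real-IP $remote_addr; # pass the client's IP along
    }
}
```

Drop a file like this into /etc/nginx/conf.d/, run nginx -t to validate, then sudo nginx -s reload to apply it with no downtime.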

Architecture Overview

              [Internet Traffic]
                      |
            [Nginx (event-driven)]
       Master process + worker processes
   Each worker handles 1000s of connections
                      |
   +---------------+--------------+-----------------+
   |               |              |                 |
[Static Files]  [Reverse     [Load            [SSL/TLS
HTML, CSS,       Proxy]       Balancer]        Termination]
JS, images      upstream     round-robin,     Let's Encrypt,
                servers      least_conn,      HTTPS offload
                   |         ip_hash
          +--------+--------+
          |        |        |
       [App 1]  [App 2]  [App 3]
       Node.js  Python   Go
       :3000    :8000    :8080
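The load-balancing strategies named in the diagram are selected per upstream block. A sketch, with placeholder server addresses:

```nginx
# Load-balancing strategies (sketch). The default with no directive is
# round-robin; uncomment exactly one alternative to change it.
upstream app_pool {
    # least_conn;   # route to the server with the fewest active connections
    # ip_hash;      # pin each client IP to the same server (sticky sessions)
    server 127.0.0.1:3000 weight=2;  # weighted round-robin: gets 2x traffic
    server 127.0.0.1:3001;
    server 127.0.0.1:3002 backup;    # only used when the others are down
}
```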

Self-Hosting & Configuration

# /etc/nginx/nginx.conf
worker_processes auto;
events {
    worker_connections 1024;
}

http {
    # Reverse proxy with load balancing
    upstream backend {
        least_conn;
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
    }

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name example.com;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        # Static files
        location /static/ {
            root /var/www;
            expires 30d;
            add_header Cache-Control "public, immutable";
        }

        # Proxy to backend
        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # WebSocket support
        location /ws {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        # Gzip compression
        gzip on;
        gzip_types text/plain application/json application/javascript text/css;
    }
}

Key Features

  • Reverse Proxy — route requests to upstream application servers
  • Load Balancing — round-robin, least connections, IP hash, and weighted
  • TLS Termination — handle HTTPS with automatic certificate management
  • Static File Serving — blazing-fast static content delivery
  • HTTP Caching — cache upstream responses for faster delivery
  • Gzip/Brotli — transparent response compression
  • Rate Limiting — protect backends from traffic spikes
  • WebSocket Proxy — full WebSocket pass-through support
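Rate limiting and HTTP caching are both a few directives each. The sketch below assumes an upstream named backend like the one in the configuration above; the zone names, sizes, and limits are illustrative, not recommended values.

```nginx
http {
    # Allow 10 requests/second per client IP, tracked in a 10 MB shared zone
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    # On-disk cache for upstream responses
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;

    server {
        location /api/ {
            limit_req zone=per_ip burst=20 nodelay;  # absorb short spikes
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;  # cache successful responses 10 min
            proxy_pass http://backend;
        }
    }
}
```

Clients that exceed the rate receive HTTP 503 by default (configurable with limit_req_status), while cached responses are served without touching the backend at all.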

Comparison with Similar Tools

Feature         Nginx          Apache          Caddy               Traefik         HAProxy
Architecture    Event-driven   Process/Thread  Event-driven        Event-driven    Event-driven
Auto HTTPS      No (Certbot)   No (Certbot)    Yes (built-in)      Yes (built-in)  No
Config Style    Declarative    Declarative     Simple (Caddyfile)  Labels/YAML     Declarative
Docker Native   Manual         Manual          Good                Excellent       Good
Performance     Excellent      Good            Very Good           Very Good       Excellent
Market Share    34%            29%             Growing             Growing         Niche
Learning Curve  Moderate       Moderate        Low                 Low             High

FAQ

Q: Nginx vs Apache — which should I use? A: Nginx for most modern deployments — it uses less memory and handles more concurrent connections. Apache only if you need .htaccess per-directory config or specific Apache modules.

Q: Nginx vs Caddy — should I switch? A: Caddy offers automatic HTTPS and simpler configuration. Nginx offers more performance tuning, wider community support, and more battle-tested production deployments. Both are excellent choices.

Q: How do I set up HTTPS with Nginx? A: Install Certbot: "sudo apt install certbot python3-certbot-nginx", then run "sudo certbot --nginx". Certbot automatically configures Nginx with free certificates from Let's Encrypt and sets up auto-renewal.

Q: How many connections can Nginx handle? A: A single Nginx instance can handle 10,000+ concurrent connections with minimal memory. With proper tuning (worker_connections, keepalive), production deployments handle 100,000+ concurrent connections.
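The tuning knobs mentioned in that answer look roughly like this in nginx.conf. The numbers are illustrative starting points for a high-concurrency box, not drop-in production values; tune them against your OS file-descriptor limits and actual load.

```nginx
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 65535;     # raise the per-worker open-file limit

events {
    worker_connections 16384;   # max simultaneous connections per worker
    multi_accept on;            # accept all pending connections at once
}

http {
    keepalive_timeout 30s;      # reuse client connections between requests

    upstream backend {
        server 127.0.0.1:3000;
        keepalive 64;           # pool of idle connections to the upstream
    }

    server {
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";  # required for upstream keepalive
            proxy_pass http://backend;
        }
    }
}
```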
