Introduction
autocannon is an HTTP/1.1 benchmarking tool written entirely in Node.js that can generate substantial load while providing detailed latency and throughput statistics. Its standout feature is HTTP pipelining support, which sends multiple requests on a single connection before waiting for responses, dramatically increasing throughput to servers that support it. The tool works both as a CLI binary and as a programmatic Node.js library, making it easy to embed benchmarks directly in test suites.
What autocannon Does
- Sends HTTP/1.1 requests with configurable concurrency, duration, and pipelining depth
- Reports latency percentiles (p50, p75, p90, p97.5, p99, p99.9), throughput, and error counts
- Supports HTTP pipelining to push multiple requests per connection round-trip
- Exposes a programmatic API for embedding benchmarks in Node.js scripts and CI jobs
- Generates live progress output with requests per second and latency updates
Architecture Overview
autocannon maintains a pool of persistent TCP connections, each managed by a Client object that tracks in-flight requests and measures response times. When pipelining is enabled, each client sends multiple requests without waiting for a response, relying on HTTP/1.1 ordered responses to match them. Timing data is collected in an HdrHistogram for accurate percentile computation with minimal memory overhead. The coordinator runs a timer loop that samples throughput every second and finalizes results when the duration expires.
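The response-matching step above can be sketched in a few lines. This is a simplified illustration, not autocannon's actual source: the `PipelineClient` class and its methods are hypothetical, but the core idea holds — because HTTP/1.1 responses arrive in request order, each response pairs with the oldest in-flight request, which is all the bookkeeping needed to compute per-request latency under pipelining.

```javascript
// Sketch (hypothetical, not autocannon internals) of matching ordered
// HTTP/1.1 responses to pipelined requests via a FIFO queue.
class PipelineClient {
  constructor() {
    this.inflight = [];  // send timestamps, oldest first
    this.latencies = []; // measured response times
  }

  // Record a request going out on the wire.
  sendRequest(now) {
    this.inflight.push(now);
  }

  // HTTP/1.1 guarantees responses arrive in request order, so each
  // response belongs to the oldest pending request.
  onResponse(now) {
    const sentAt = this.inflight.shift();
    this.latencies.push(now - sentAt);
  }
}

const client = new PipelineClient();
// Pipeline three requests at t = 0, 1, 2 ms without waiting.
client.sendRequest(0);
client.sendRequest(1);
client.sendRequest(2);
// Responses arrive in order at t = 10, 12, 15 ms.
client.onResponse(10);
client.onResponse(12);
client.onResponse(15);
console.log(client.latencies); // [ 10, 11, 13 ]
```

In the real tool these timings feed the HdrHistogram rather than a plain array, which keeps percentile queries cheap even across millions of samples.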
Self-Hosting & Configuration
- Install globally (`npm install -g autocannon`) or as a project dependency
- Set `-c` for connections, `-d` for duration in seconds, `-p` for pipeline depth
- Use `-m POST` with `-b 'body'` or `-i body.json` for non-GET requests
- Pass custom headers via `-H "Authorization: Bearer token"`
- Output JSON with `-j` for automated threshold checking in CI
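Putting the flags above together, a few representative invocations might look like this. The `localhost:3000` URLs and the body payload are placeholders; substitute your own service.

```shell
# 100 connections for 30 seconds with pipeline depth 10:
autocannon -c 100 -d 30 -p 10 http://localhost:3000

# POST with an inline JSON body and custom headers:
autocannon -m POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer token" \
  -b '{"hello":"world"}' \
  http://localhost:3000/api

# Machine-readable output for CI threshold checks:
autocannon -j -c 50 -d 10 http://localhost:3000 > result.json
```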
Key Features
- HTTP pipelining support for testing servers that handle queued requests efficiently
- Dual-mode: CLI tool for quick tests, npm library for programmatic benchmarks
- HdrHistogram-based latency tracking for accurate percentile measurements
- Live progress bar showing real-time requests per second and latency
- Pure Node.js — no native compilation step, runs wherever Node runs
Comparison with Similar Tools
- wrk — C-based with higher peak RPS; autocannon offers pipelining and a programmatic JS API
- hey — simpler Go binary; autocannon adds pipelining and embeddable library mode
- oha — Rust TUI for visual feedback; autocannon is ideal for Node.js-native CI workflows
- Vegeta — constant-rate (open-model) Go tool; autocannon drives a fixed pool of connections (closed model) with optional pipelining
- k6 — scriptable in JavaScript with cloud features; autocannon is lighter for pure HTTP throughput tests
FAQ
Q: What is HTTP pipelining and why does it matter?
A: Pipelining sends multiple requests on one connection without waiting for each response. It reduces round-trip overhead and can reveal server bottlenecks that standard serial requests miss.
Q: Can I use autocannon as a library in my tests?
A: Yes. Require autocannon and call autocannon(opts, callback) to run benchmarks programmatically and assert on the results.
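A minimal sketch of that pattern, assuming `autocannon` is installed as a dependency and a server is already listening on the placeholder URL; the 200 ms latency budget is an arbitrary example threshold:

```javascript
// Embed a benchmark in a script or CI job (requires `npm install autocannon`
// and a server running at the placeholder URL below).
const autocannon = require('autocannon');

autocannon({
  url: 'http://localhost:3000', // placeholder target
  connections: 10,              // equivalent to -c
  duration: 10,                 // equivalent to -d (seconds)
  pipelining: 4,                // equivalent to -p
}, (err, result) => {
  if (err) throw err;
  // The result object carries the same statistics as the CLI output.
  console.log('avg req/s:', result.requests.average);
  console.log('p99 latency (ms):', result.latency.p99);
  // Fail the run if the latency budget is exceeded.
  if (result.latency.p99 > 200) {
    process.exitCode = 1;
  }
});
```

Because the callback receives the full result object, a test runner can assert on any percentile or throughput figure rather than parsing CLI text.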
Q: Does autocannon support HTTPS?
A: Yes. Pass an https:// URL and autocannon handles TLS automatically. Use --no-strictSSL for self-signed certificates.
Q: How do I compare results between runs?
A: Use autocannon-compare to diff two JSON result files and see if latency or throughput regressed.
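A typical workflow, with placeholder URLs and file names, might look like this (`autocannon-compare` is a separate npm package):

```shell
# Capture JSON results from a run before and after a change:
autocannon -j -c 50 -d 10 http://localhost:3000 > before.json
# ...deploy the change under test...
autocannon -j -c 50 -d 10 http://localhost:3000 > after.json

# Diff the two result files:
npm install -g autocannon-compare
autocannon-compare before.json after.json
```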