# oha — HTTP Load Generator with Real-Time TUI Dashboard

> HTTP load generator written in Rust with a real-time terminal UI for visualizing request latency and throughput.

## Quick Use

```bash
# Install via cargo
cargo install oha

# Install via Homebrew
brew install oha

# Basic load test: 200 requests, 50 concurrent
oha -n 200 -c 50 http://localhost:8080/

# Run for 30 seconds with real-time TUI
oha -z 30s -c 100 http://localhost:8080/
```

## Introduction

oha is an HTTP load generator written in Rust that provides a real-time terminal user interface showing live latency histograms, status code distributions, and throughput metrics as the test runs. Inspired by rakyll/hey, it adds visual feedback so you can observe server behavior under load without waiting for a final summary. The Rust implementation keeps client-side resource consumption low even at high concurrency levels.

## What oha Does

- Sends configurable HTTP load with a fixed request count or duration-based runs
- Displays a real-time TUI with a live latency histogram, percentiles, and error rates
- Supports HTTP/1.1 and HTTP/2 with custom headers, methods, and request bodies
- Outputs final results as text or JSON for CI integration
- Handles TLS connections with configurable certificate options

## Architecture Overview

oha uses Tokio as its async runtime to manage thousands of concurrent connections efficiently. Each connection runs as an independent async task that sends requests and records response times. A dedicated TUI thread powered by ratatui reads timing data from a shared buffer and renders live charts at a fixed refresh rate. After the run completes, oha aggregates all samples and prints a statistical summary including percentiles, mean, and standard deviation.
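The request-shaping options above (method, headers, body, HTTP/2) can be combined in a single invocation. A sketch of a duration-based HTTP/2 POST run; the endpoint, header, and body values are illustrative placeholders:

```bash
# 30-second HTTP/2 load test sending a JSON body with a custom header.
# Endpoint, header, and body values here are illustrative placeholders.
oha -z 30s -c 50 \
    --http2 \
    -m POST \
    -H 'Content-Type: application/json' \
    -d '{"key":"val"}' \
    http://localhost:8080/api/items
```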
## Self-Hosting & Configuration

- Install via `cargo install oha` or `brew install oha` on macOS
- Pre-built binaries are available on GitHub Releases for Linux, macOS, and Windows
- Use `-n` for a total request count or `-z` for duration-based runs (e.g., `-z 60s`)
- Set `-c` for concurrent connections and `-q` for per-second rate limiting
- Disable the TUI with `--no-tui` for headless CI environments and emit JSON output with `-j`

## Key Features

- Real-time terminal dashboard with live latency histogram and throughput graph
- Written in Rust for minimal memory footprint and maximum client-side efficiency
- Supports both HTTP/1.1 and HTTP/2 protocols out of the box
- JSON output mode for programmatic consumption in CI/CD pipelines
- Cross-platform: runs natively on Linux, macOS, and Windows

## Comparison with Similar Tools

- **hey** — simpler text-only output with no live dashboard; oha adds real-time visualization
- **wrk** — higher raw throughput via C/epoll but no TUI and Unix-only; oha is cross-platform
- **bombardier** — Go-based with a progress bar; oha offers a richer TUI with histograms
- **k6** — scriptable in JavaScript with cloud reporting; oha is a lightweight single-purpose tool
- **Vegeta** — constant-rate attack model with plot output; oha provides live interactive feedback

## FAQ

**Q: Can I disable the TUI for CI usage?**
A: Yes. Pass `--no-tui` to get a simple text summary, or `-j` for JSON output suitable for automated parsing.

**Q: Does oha support HTTP/2?**
A: Yes. Use `--http2` to force HTTP/2 connections, or let oha negotiate via ALPN automatically.

**Q: How does oha compare to wrk in throughput?**
A: wrk typically achieves higher raw RPS due to its C/epoll design. oha trades some peak throughput for a richer user experience and cross-platform support.

**Q: Can I send POST requests with a body?**
A: Yes. Use `-m POST -d '{"key":"val"}'` or `-D @body.json` to load the body from a file.
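The `--no-tui`/`-j` combination above lets a CI pipeline gate on the run's summary. A sketch of such a gate; the JSON field names (`summary.successRate`) are an assumption about oha's output shape, so verify them against your oha version:

```bash
# Gate a CI step on an oha run's success rate.
# In a real pipeline the JSON would come from something like:
#   oha --no-tui -j -z 30s -c 50 http://localhost:8080/ > result.json
# Here we write a hand-made sample; the field names are an assumption
# about oha's JSON shape, not a verbatim dump.
cat > result.json <<'EOF'
{"summary": {"successRate": 1.0, "requestsPerSec": 1234.5}}
EOF

# Fail the job (non-PASS output) if fewer than 99% of requests succeeded.
python3 - <<'EOF'
import json
summary = json.load(open("result.json"))["summary"]
print("PASS" if summary["successRate"] >= 0.99 else "FAIL")
EOF
```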
## Sources

- https://github.com/hatoo/oha
- https://github.com/hatoo/oha/blob/master/README.md