Hyperfine — Command-Line Benchmarking Tool
Hyperfine is a command-line benchmarking tool written in Rust. It runs benchmarks with statistical analysis: multiple runs, warmup, outlier detection, comparison across commands, and result export. By the author of bat and fd.
What it is
Hyperfine is a command-line benchmarking tool written in Rust. It runs shell commands multiple times, performs statistical analysis on the timing results, and reports mean, standard deviation, min, and max execution times. It supports warmup runs, parameterized benchmarks, and exporting results to CSV, JSON, or Markdown.
The target audience includes developers optimizing CLI tools, DevOps engineers comparing script performance, and anyone who needs reproducible timing measurements for shell commands.
How it saves time or tokens
Manual benchmarking with time gives a single noisy measurement. Hyperfine automates multiple runs, flags statistical outliers, and reports mean and standard deviation, turning a five-minute manual process into a one-liner. The parameterized benchmark feature lets you compare multiple implementations in a single command.
For AI-assisted optimization workflows, hyperfine provides structured JSON output that an LLM can parse to recommend which implementation is faster, with statistical confidence.
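For example, the exported JSON can be reduced to a "fastest command" verdict with a short script. The results.json below is a hand-written stand-in whose field names (results, command, mean, stddev) match hyperfine's export schema; the timing numbers are made up for illustration:

```shell
# Stand-in for a real export; in practice this file would come from:
#   hyperfine --export-json results.json 'rg TODO src/' 'grep -r TODO src/'
cat > results.json <<'EOF'
{"results": [
  {"command": "rg TODO src/",      "mean": 0.042, "stddev": 0.003},
  {"command": "grep -r TODO src/", "mean": 0.310, "stddev": 0.012}
]}
EOF

# Pick the entry with the lowest mean runtime
python3 - <<'EOF'
import json
results = json.load(open("results.json"))["results"]
fastest = min(results, key=lambda r: r["mean"])
print(f"fastest: {fastest['command']} ({fastest['mean']:.3f} s)")
EOF
```

For the sample data above this prints `fastest: rg TODO src/ (0.042 s)`.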
How to use
- Install hyperfine:
# macOS
brew install hyperfine
# Ubuntu/Debian
sudo apt install hyperfine
# Cargo
cargo install hyperfine
- Benchmark a single command:
hyperfine 'sleep 0.3'
- Compare two commands:
hyperfine 'fd . /usr/lib' 'find /usr/lib'
Example
Parameterized benchmark comparing grep implementations:
# Compare ripgrep vs grep vs ag on a large codebase
hyperfine \
--warmup 3 \
--min-runs 10 \
'rg TODO src/' \
'grep -r TODO src/' \
'ag TODO src/' \
--export-json results.json
# Parameterized: test different thread counts
hyperfine \
--parameter-scan threads 1 8 \
'sort --parallel={threads} large-file.txt'
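To make the parameter substitution concrete: --parameter-scan threads 1 8 expands the {threads} placeholder once per value, roughly like the shell loop below. This is a sketch of the expansion only; the timing and comparison of each expanded command is hyperfine's job:

```shell
# Rough equivalent of the expansion performed by:
#   hyperfine --parameter-scan threads 1 8 'sort --parallel={threads} large-file.txt'
template='sort --parallel={threads} large-file.txt'
for threads in 1 2 3 4 5 6 7 8; do
  # substitute the placeholder with the current parameter value
  cmd=$(printf '%s' "$template" | sed "s/{threads}/$threads/")
  echo "would benchmark: $cmd"
done
```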
Related on TokRepo
- Automation tools -- CLI productivity and developer tools
- AI tools for testing -- Performance testing and benchmarking tools
Common pitfalls
- Not using --warmup for commands that benefit from filesystem caching. The first run is often significantly slower due to a cold cache.
- Benchmarking commands that modify state (creating files, writing to databases) without cleanup. Use --prepare 'rm -f output.txt' to reset between runs.
- Running benchmarks on a loaded system. Background processes introduce variance. Close other applications or use --min-runs 20 for more stable results.
- Comparing commands with different output volumes without redirecting to /dev/null. I/O time dominates when terminal rendering is involved.
- Forgetting --shell=none for commands that do not need shell interpretation. This avoids including shell startup overhead in measurements.
Frequently Asked Questions
How many runs does hyperfine perform?
Hyperfine automatically determines the number of runs based on the command's execution time. Fast commands get more runs for statistical stability. You can override this with the --min-runs and --max-runs flags.
Can I export the results for further analysis?
Yes. Use --export-json, --export-csv, or --export-markdown to save results. The JSON output includes all individual run times, making it suitable for custom analysis or plotting with tools like matplotlib or gnuplot.
Does hyperfine work on Windows?
Yes. Hyperfine runs on Windows, macOS, and Linux. On Windows, it uses cmd.exe or PowerShell as the shell. Install via Scoop, Chocolatey, or Cargo.
How do I benchmark the same command with different parameters?
Use --parameter-scan for numeric ranges (e.g., thread counts) or --parameter-list for discrete values. Hyperfine runs each parameter combination and reports comparative results.
Can I use hyperfine in CI?
Yes. Use --export-json to capture results, then compare against baseline values in your CI script. Hyperfine returns exit code 0 on success, making it straightforward to integrate into automated testing workflows.
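A minimal sketch of such a CI gate, assuming a results.json produced by --export-json. The command name, baseline value, and timing number here are placeholders:

```shell
# Placeholder export; in CI this file would come from:
#   hyperfine --export-json results.json './my-tool'
cat > results.json <<'EOF'
{"results": [{"command": "./my-tool", "mean": 0.095}]}
EOF

BASELINE=0.120  # seconds; tune per project
MEAN=$(python3 -c 'import json; print(json.load(open("results.json"))["results"][0]["mean"])')

# Shell arithmetic is integer-only, so compare the floats in Python
if python3 -c "import sys; sys.exit(0 if $MEAN <= $BASELINE else 1)"; then
  echo "OK: mean ${MEAN}s is within baseline ${BASELINE}s"
else
  echo "REGRESSION: mean ${MEAN}s exceeds baseline ${BASELINE}s" >&2
  exit 1
fi
```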
Citations (3)
- Hyperfine GitHub -- Hyperfine is a command-line benchmarking tool written in Rust
- Hyperfine README -- Supports warmup runs, parameterized benchmarks, and export formats
- Hyperfine Documentation -- Statistical analysis with mean, standard deviation, and confidence intervals
Related Assets
doctest — The Fastest Feature-Rich C++ Testing Framework
doctest is a single-header C++ testing framework designed for minimal compile-time overhead and maximum speed.
Chai — BDD/TDD Assertion Library for Node.js
Chai is a flexible assertion library for Node.js and browsers that supports expect, should, and assert styles.
Supertest — HTTP Assertion Library for Node.js APIs
Supertest provides a high-level API for testing HTTP servers in Node.js with fluent assertion chaining.