Scripts · Apr 12, 2026 · 2 min read

Hyperfine — Command-Line Benchmarking Tool

Hyperfine is a command-line benchmarking tool written in Rust. It runs benchmarks with statistical analysis: multiple runs, warmup, outlier detection, comparison across commands, and result export. By the author of bat and fd.

TL;DR
Hyperfine benchmarks shell commands with warmup runs, statistical analysis, and result export.
§01

What it is

Hyperfine is a command-line benchmarking tool written in Rust. It runs shell commands multiple times, performs statistical analysis on the timing results, and reports mean, standard deviation, min, and max execution times. It supports warmup runs, parameterized benchmarks, and exporting results to CSV, JSON, or Markdown.

The target audience includes developers optimizing CLI tools, DevOps engineers comparing script performance, and anyone who needs reproducible timing measurements for shell commands.

§02

How it saves time or tokens

Manual benchmarking with the time command gives a single noisy measurement. Hyperfine automates repeated runs, detects statistical outliers, and reports mean, standard deviation, min, and max, turning a five-minute manual process into a one-liner. The parameterized benchmark feature lets you compare multiple implementations in a single command.

For AI-assisted optimization workflows, hyperfine provides structured JSON output that an LLM can parse to recommend which implementation is faster, with statistical confidence.
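As a sketch of that parsing step, assuming jq is installed: the results.json below is hand-written but follows the schema hyperfine emits with --export-json (the numbers are illustrative), and the jq query picks the command with the lowest mean runtime.

```shell
# Hand-written sample in hyperfine's --export-json schema (numbers are illustrative)
cat > results.json <<'EOF'
{"results": [
  {"command": "rg TODO src/",      "mean": 0.012, "stddev": 0.001},
  {"command": "grep -r TODO src/", "mean": 0.085, "stddev": 0.004}
]}
EOF

# Report the command with the lowest mean runtime
jq -r '.results | min_by(.mean) | .command' results.json
# → rg TODO src/
```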

§03

How to use

  1. Install hyperfine:
# macOS
brew install hyperfine

# Ubuntu/Debian
sudo apt install hyperfine

# Cargo
cargo install hyperfine
  2. Benchmark a single command:
hyperfine 'sleep 0.3'
  3. Compare two commands:
hyperfine 'fd . /usr/lib' 'find /usr/lib'
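The comparison form can be sanity-checked with two dummy commands. This is a sketch that skips itself when hyperfine is not on PATH; --runs pins the run count so the demo finishes quickly, and hyperfine prints a relative-speed summary at the end.

```shell
# Skip gracefully if hyperfine is not installed
if command -v hyperfine >/dev/null 2>&1; then
  # Fixed run count keeps the demo short
  hyperfine --runs 5 'sleep 0.1' 'sleep 0.2'
else
  echo "hyperfine not installed; skipping demo"
fi
```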
§04

Example

Parameterized benchmark comparing grep implementations:

# Compare ripgrep vs grep vs ag on a large codebase
hyperfine \
  --warmup 3 \
  --min-runs 10 \
  --export-json results.json \
  'rg TODO src/' \
  'grep -r TODO src/' \
  'ag TODO src/'

# Parameterized: test different thread counts
hyperfine \
  --parameter-scan threads 1 8 \
  'sort --parallel={threads} large-file.txt'
§05

Common pitfalls

  • Not using --warmup for commands that benefit from filesystem caching. The first run is often significantly slower due to cold cache.
  • Benchmarking commands that modify state (creating files, writing databases) without cleanup. Use --prepare 'rm -f output.txt' to reset between runs.
  • Running benchmarks on a loaded system. Background processes introduce variance. Close other applications or use --min-runs 20 for more stable results.
  • Comparing commands with different output volumes without redirecting to /dev/null. I/O time dominates when terminal rendering is involved.
  • Forgetting --shell=none for commands that do not need shell interpretation. This avoids shell startup overhead in measurements.
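The caching, cleanup, and output-redirection pitfalls combined look like this in practice. A sketch assuming hyperfine is installed (it skips itself otherwise), with sorted.txt as a throwaway output file.

```shell
if command -v hyperfine >/dev/null 2>&1; then
  # --warmup primes the filesystem cache; --prepare resets state before every run;
  # redirecting to /dev/null keeps terminal rendering out of the measurement
  hyperfine --warmup 3 \
    --prepare 'rm -f sorted.txt' \
    'sort /etc/hosts > sorted.txt && cat sorted.txt > /dev/null'
fi
```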

Frequently Asked Questions

How many runs does hyperfine perform by default?

Hyperfine automatically determines the number of runs based on the command's execution time. Fast commands get more runs for statistical stability. You can override this with --min-runs and --max-runs flags.

Can hyperfine export results for plotting?

Yes. Use --export-json, --export-csv, or --export-markdown to save results. The JSON output includes all individual run times, making it suitable for custom analysis or plotting with tools like matplotlib or gnuplot.
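For instance, the per-run times can be pulled out of the JSON for plotting. A sketch assuming jq, with a hand-written results.json that mirrors hyperfine's schema (the times array holds every individual run; the values are made up).

```shell
# Sample file in hyperfine's --export-json schema; times are made up
cat > results.json <<'EOF'
{"results": [
  {"command": "sleep 0.1", "mean": 0.101, "times": [0.1012, 0.1008, 0.1011]}
]}
EOF

# One measurement per line, ready for gnuplot or matplotlib
jq -r '.results[0].times[]' results.json > times.txt
cat times.txt
```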

Does hyperfine work on Windows?

Yes. Hyperfine runs on Windows, macOS, and Linux. On Windows, it uses cmd.exe or PowerShell as the shell. Install via Scoop, Chocolatey, or Cargo.

How do I benchmark commands with different parameters?

Use --parameter-scan for numeric ranges (e.g., thread counts) or --parameter-list for discrete values. Hyperfine runs each parameter combination and reports comparative results.
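A sketch of the --parameter-list form, guarded so it skips when hyperfine is absent; the gzip compression levels are just illustrative stand-ins for discrete parameter values.

```shell
if command -v hyperfine >/dev/null 2>&1; then
  # Generate a small input file, then benchmark each listed compression level
  seq 1 20000 > data.txt
  hyperfine --parameter-list level 1,6,9 \
    'gzip -{level} -c data.txt > /dev/null'
fi
```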

Can I use hyperfine in CI/CD pipelines?

Yes. Use --export-json to capture results, then compare against baseline values in your CI script. Hyperfine returns exit code 0 on success, making it straightforward to integrate into automated testing workflows.
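A minimal sketch of that CI check, assuming jq and awk are available. The results.json here is hand-written in hyperfine's --export-json schema to stand in for a real run, and baseline is a hypothetical threshold you would store alongside the pipeline.

```shell
# Stand-in for a real hyperfine run: hand-written file in --export-json schema
cat > results.json <<'EOF'
{"results": [{"command": "./app --bench", "mean": 0.42}]}
EOF

baseline=0.50   # hypothetical threshold committed with the pipeline
mean=$(jq -r '.results[0].mean' results.json)

# awk handles the floating-point comparison; a non-zero exit fails the CI job
if awk -v m="$mean" -v b="$baseline" 'BEGIN { exit !(m <= b) }'; then
  echo "PASS: mean ${mean}s within baseline ${baseline}s"
else
  echo "FAIL: performance regression (${mean}s > ${baseline}s)"
  exit 1
fi
```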
