Apr 27, 2026 · 3 min read

Google Benchmark — Microbenchmark Library for C++

Google Benchmark is a library for measuring and reporting the performance of C++ code with statistical rigor.

Introduction

Google Benchmark is a C++ microbenchmarking library that measures function execution time with statistical accuracy. It automatically determines iteration counts, reports mean and standard deviation, and detects CPU scaling issues. The library is used internally at Google and across the C++ ecosystem for performance-sensitive code.

What Google Benchmark Does

  • Runs a function repeatedly and reports wall time, CPU time, and throughput
  • Automatically scales the iteration count until each measurement runs long enough to be statistically reliable
  • Supports parameterized benchmarks with ranges and custom arguments
  • Detects and warns about CPU frequency scaling that can skew results
  • Outputs results in console, JSON, or CSV format for CI integration

Architecture Overview

Each BENCHMARK macro registers a function pointer in a global registry. At runtime the harness calls each function in a loop, increasing the iteration count until the measured run exceeds a minimum time (0.5 s by default, tunable with --benchmark_min_time); running long enough keeps per-iteration timing noise small. The timer uses clock_gettime on Linux and QueryPerformanceCounter on Windows for nanosecond resolution. A statistics module computes mean, median, and standard deviation across repeated trials, optionally applying custom statistics functions.

Self-Hosting & Configuration

  • Install via CMake FetchContent, vcpkg, Conan, or system packages
  • Set --benchmark_filter=BM_Sort to run specific benchmarks by regex
  • Use --benchmark_repetitions=10 and --benchmark_report_aggregates_only for statistical summaries
  • Disable CPU scaling before runs: cpupower frequency-set -g performance
  • Export JSON output with --benchmark_out=results.json for tracking over time
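Put together, a typical CI invocation might look like this (a sketch; bm_sort and results.json are placeholder names):

```shell
# Pin the CPU governor so frequency scaling doesn't skew timings (needs root).
sudo cpupower frequency-set -g performance

# Run only benchmarks matching the regex, with 10 repetitions,
# reporting only the mean/median/stddev aggregates, and export JSON.
./bm_sort --benchmark_filter=BM_Sort \
          --benchmark_repetitions=10 \
          --benchmark_report_aggregates_only=true \
          --benchmark_out=results.json \
          --benchmark_out_format=json
```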

Key Features

  • Automatic iteration scaling ensures consistent timing without manual tuning
  • Parameterized benchmarks sweep across input sizes with Range() and DenseRange()
  • Fixture-based benchmarks share expensive setup across related measurements
  • Custom counters track domain-specific metrics like bytes processed or items per second
  • The bundled compare.py script reports percentage differences between baseline and candidate runs

Comparison with Similar Tools

  • Catch2 BENCHMARK — integrated into Catch2 tests; Google Benchmark offers richer statistics and output formats
  • Hyperfine — benchmarks CLI commands; Google Benchmark measures C++ functions in-process
  • Celero — similar microbenchmark library; Google Benchmark has broader adoption and CI tooling
  • nanobench — single-header with fast compile; Google Benchmark provides more statistical controls
  • perf — Linux system-level profiler; Google Benchmark measures user-defined code sections

FAQ

Q: Does Google Benchmark support multi-threaded benchmarks? A: Yes. Use ->Threads(N) to run a benchmark function across N threads simultaneously.

Q: Can I compare two implementations side by side? A: Yes. Run each to a JSON output file, then use the compare.py script in the library's tools/ directory to produce percentage diffs between the result files.

Q: How do I prevent the compiler from optimizing away my benchmark? A: Use benchmark::DoNotOptimize(result) and benchmark::ClobberMemory() to prevent dead-code elimination.

Q: Does it work with Bazel? A: Yes. Google Benchmark is published on the Bazel Central Registry, so it can be pulled in with a bazel_dep in MODULE.bazel and depended on from any cc_library or cc_test target.
