# API Testing Stack
Bruno, Hoppscotch, Hurl, k6, Artillery, Locust — open-source replacements for Postman + JMeter, no SaaS lock-in.
## What's in this pack
This pack collects the six open-source tools that have replaced Postman and JMeter in modern engineering teams. Postman's pricing changes and forced cloud sync since 2023 pushed many teams off the platform; JMeter still works but its XML scripts and Java-only ecosystem feel painful next to JavaScript-native alternatives. The pack splits the surface into three problems: interactive exploration, CI assertions, and load generation.
| # | Asset | Role | Notable feature |
|---|---|---|---|
| 1 | Bruno | Interactive client | Plain-text .bru files, git-versionable collections |
| 2 | Hoppscotch | Interactive client | Browser-first, self-hostable, real-time collaboration |
| 3 | Hurl | CI runner | Plain-text DSL, single binary, perfect for Makefiles |
| 4 | k6 | Load testing | JavaScript scripting, Grafana-native dashboards |
| 5 | Artillery | Load testing | YAML scenarios, Playwright-driven browser load |
| 6 | Locust | Load testing | Pure-Python tests, distributed master-worker model |
The interactive client + CI runner + load generator triad covers the full lifecycle. Bruno or Hoppscotch is what you click around in while writing endpoints. Hurl is what runs in CI to confirm the contract didn't drift. k6 / Artillery / Locust is what runs against staging before each release.
## Why move off Postman and JMeter
Three forcing functions converged. Postman went cloud-only by default in 2023 — collections sync to their servers, sharing requires accounts, and the free tier shrank. JMeter's UX is from 2005 — XML configs, GUI-first, no pleasant scripting story. CI demands diffability — Postman exports JSON blobs that produce unreadable PR diffs; Bruno's .bru and Hurl's plain text yield readable diffs that reviewers can sanity-check.
Beyond the migration story, the open-source pack adds capabilities Postman never had:
- Run the same load test in three engines to triangulate (k6 for HTTP-heavy, Artillery for browser flows, Locust when your Python testers want to stay in Python).
- Hurl files double as documentation — paste a `.hurl` file into a runbook and any engineer who knows curl can read it (see the sketch below).
- Self-host Hoppscotch to keep enterprise data on your network — Postman's enterprise tier charges per seat.
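For instance, a minimal `.hurl` contract sketch. The endpoint and fields are placeholders, not part of the pack:

```hurl
# Smoke check for a users endpoint (illustrative values).
GET https://staging.example.com/api/users/1

HTTP 200
[Asserts]
jsonpath "$.user.id" == 1
jsonpath "$.user.email" exists
```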
## Install in one command
```bash
# Install the entire pack into the current project
tokrepo install pack/api-testing-stack

# Or pick individual assets
tokrepo install hurl
tokrepo install k6
```
The TokRepo CLI writes Bruno collection skeletons, a `tests/api/` directory of `.hurl` files, and a `loadtest/` folder with templates for k6, Artillery, and Locust. CI snippets for GitHub Actions and GitLab CI are included per asset, plus a Makefile target so `make api-test` runs the entire pyramid (Hurl smoke first, k6 baseline second) without anyone needing to remember the flags.
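A sketch of what that generated target could look like. The paths (`tests/api/`, `loadtest/k6/baseline.js`) are assumptions for illustration, not guaranteed CLI output:

```make
# Run the API test pyramid: Hurl smoke tests first, k6 baseline second.
api-test:
	hurl --test tests/api/*.hurl
	k6 run loadtest/k6/baseline.js
```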
## Common pitfalls
- Treating k6 results as production-realistic without warm-up. k6 ramps fast by default; if you start at 1000 VUs, your service spends the first 30 seconds populating caches and your p99 looks worse than it is. Use `stages` with a 30-60s ramp (see the sketch after this list).
- Hurl assertions on body text. Assert on JSON paths (`jsonpath "$.user.id"`), not substrings. Body-text comparisons fail spuriously when your serializer changes whitespace or key ordering.
- Sharing Bruno collections without an `environments/` strategy. Bruno supports per-environment variables; if you commit credentials directly into `.bru` files, you've leaked secrets to git history. Use `bru env` and `.env.local` patterns.
- Locust at very high RPS on one machine. Locust's GIL means a single Python process caps out around 1k-2k RPS; use `--master` plus multiple `--worker` processes, or switch to k6 for >5k RPS scenarios.
- Forgetting to record p99 alongside throughput. A test that hits 10k RPS at p50=20ms is meaningless if p99=8s. Always export the latency histogram, not just the mean.
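A minimal k6 sketch covering the first and last pitfalls: a warm-up ramp via `stages` plus explicit tail-latency thresholds. The URL and numbers are placeholders:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  // Ramp up gradually so caches are warm before the steady-state window.
  stages: [
    { duration: '60s', target: 200 }, // warm-up ramp
    { duration: '5m',  target: 200 }, // steady state
    { duration: '30s', target: 0 },   // ramp down
  ],
  // Record tail latency explicitly instead of trusting the mean.
  thresholds: {
    http_req_duration: ['p(95)<300', 'p(99)<1000'], // ms
  },
};

export default function () {
  http.get('https://staging.example.com/api/users'); // placeholder URL
  sleep(1);
}
```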
## Recommended workflow
Once installed, the daily flow looks like this. An engineer drafts an endpoint and explores it in Bruno against a local server, committing the resulting `.bru` files to the repo. They translate the happy path and two failure cases into a `tests/api/users.hurl` file — the Hurl tests are now the contract. CI runs the Hurl suite on every PR; failures block merge. Once the endpoint hits staging, a scheduled Prefect job runs the k6 baseline (200 VUs for 5 minutes) and compares p95/p99 against the previous deploy's checkpoint. Regressions trigger a Slack alert with a link to the Grafana dashboard.
Before each major release, the team runs Artillery against staging with a Playwright scenario that mimics real user navigation, then Locust with a 10x scenario to find the cliff. The three engines disagree by 10-20% on raw RPS but always agree on the shape of the curve — which means you trust the inflection point even when individual numbers diverge.
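A sketch of what that Artillery browser scenario can look like, assuming Artillery's Playwright engine. The file paths, function name, and rates are illustrative:

```yaml
# loadtest/artillery/browser-journey.yml (hypothetical path)
config:
  target: https://staging.example.com   # placeholder
  engines:
    playwright: {}
  processor: ./flows.js                 # exports userJourney below
  phases:
    - duration: 300
      arrivalRate: 5                    # new browser sessions per second
scenarios:
  - engine: playwright
    testFunction: userJourney           # Playwright flow defined in flows.js
```

For the Locust 10x run, the distributed setup from the pitfalls list applies: one `locust --master` process plus several `locust --worker --master-host=<host>` workers.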
## Relationship to other packs
This pack pairs naturally with Workflow Orchestration (run nightly load tests via Inngest or Prefect cron) and Static Site & Docs Builders (publish OpenAPI docs that match what your Hurl tests assert against — a single source of truth across docs and tests). For full observability of test runs, layer in Charts & Data Viz to render k6's CSV output through Apache ECharts or Grafana.
## Frequently asked questions
### Is this pack free to run end-to-end?
Yes. Every tool is open source (licenses range from MIT and Apache 2.0 to AGPL, as with Hoppscotch self-hosting). k6 has a paid Grafana Cloud k6 service for managed result storage, but the OSS binary runs unlimited tests locally or in your CI. Artillery has a paid pro tier with managed reporting; the core CLI is free. Locust and Hurl are unconditionally free.
### How does this compare to Postman?
Postman remains the smoothest experience for first-time API exploration thanks to its mature UX. The open-source pack reaches feature parity for collections, environments, and test scripts, with two big advantages: text-based files that diff cleanly in code review, and self-hosting so secrets never leave your network. Trade-off: less mature collaboration UX and no built-in API monitoring (use the Workflow Orchestration pack instead).
### Will this work with Claude Code or Cursor?
Yes. Claude Code can author Bruno collections, Hurl tests, and k6 scripts from an OpenAPI spec — the TokRepo asset pages include subagent prompts for this. A common workflow: give Claude an `openapi.yaml`, ask it to generate `tests/api/*.hurl` covering every endpoint, then run `hurl --test tests/api/*.hurl` in CI. Cursor users get equivalent rules in `.cursor/rules/`.
### What's the difference between Bruno and Hoppscotch?
Bruno is desktop-first and file-based — collections live on your disk as `.bru` text files. It's best for small teams that already use git as the source of truth. Hoppscotch is browser-first with optional self-hosting and a richer real-time collaboration model — best when product managers and QA need to see request results too. Most teams pick one, but they coexist fine if you're migrating gradually.
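For a feel of the format, a minimal sketch of a `.bru` request file. The name, URL, and assertion are illustrative:

```
meta {
  name: Get user
  type: http
  seq: 1
}

get {
  url: {{baseUrl}}/users/1
}

assert {
  res.status: eq 200
}
```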
### Any operational gotchas when running k6 in CI?
k6's exit code reflects only `thresholds`, never `check()` results: a script whose checks all fail still exits 0 unless you define thresholds on the relevant metrics (for example a rate threshold on the `checks` metric, optionally with `abortOnFail: true`). Many teams ship green CI runs that secretly violated p99 SLOs because they trusted the exit code without defining thresholds. Also: pipe k6's `--out json=...` output to a file and archive it as a CI artifact so you can compare runs over time.
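One way to wire that into GitHub Actions, as a sketch with placeholder paths and names:

```yaml
# Hypothetical steps inside a CI job: run the baseline, keep the raw output.
- name: Run k6 baseline
  run: k6 run --out json=k6-results.json loadtest/k6/baseline.js
- name: Archive k6 results
  uses: actions/upload-artifact@v4
  with:
    name: k6-results
    path: k6-results.json
```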