Configs · Apr 12, 2026 · 2 min read

Locust — Scalable Load Testing in Pure Python

Locust is an open-source load testing tool in which you define user behavior in plain Python code. It is distributed and scalable, with a real-time web UI for monitoring. There is no DSL to learn; you just write Python.

TL;DR
Locust lets you define load test scenarios as Python code, distribute them across machines, and monitor results in a real-time web UI.
§01

What it is

Locust is an open-source load testing tool where you define user behavior in plain Python code. There is no DSL to learn, no XML configuration, and no GUI-driven test creation. Locust is distributed by design: run workers across multiple machines to generate massive load. A real-time web UI shows request statistics, response times, and failure rates as tests run.

Locust targets Python developers and QA teams who want to write performance tests using familiar Python tools and libraries. It suits API load testing, website performance testing, and any HTTP-based workload.

§02

How it saves time or tokens

Locust tests are Python classes: define tasks as methods, use Python libraries for data generation, and leverage pytest fixtures or factories for test data. The Python ecosystem (requests, faker, databases) is available in your test scripts. No proprietary format means tests are readable, maintainable, and version-controllable.

For AI-assisted development, Locust's Python-based approach means LLMs generate test scripts using familiar Python patterns.
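To illustrate the "any Python library" point, here is a stdlib-only sketch of generating per-request test data that a task could post; the payload field names are hypothetical, not part of any Locust API:

```python
import random
import string

def random_order():
    """Build a hypothetical order payload using only the standard library."""
    sku = "".join(random.choices(string.ascii_uppercase + string.digits, k=8))
    return {
        "product_id": random.randint(1, 500),
        "sku": sku,
        "quantity": random.randint(1, 5),
    }

# A task body could then do: self.client.post('/api/orders', json=random_order())
payload = random_order()
```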

§03

How to use

  1. Install Locust: pip install locust.
  2. Create a locustfile.py defining user classes and tasks.
  3. Run locust in that directory and open the web UI at http://localhost:8089.
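The steps above as a shell session (the filename follows Locust's default; 8089 is Locust's default UI port):

```shell
pip install locust                # step 1: install
# step 2: write locustfile.py with your user classes and tasks
locust -f locustfile.py           # step 3: serves the web UI at http://localhost:8089
```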
§04

Example

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user pauses 1-3 seconds between tasks
    wait_time = between(1, 3)

    def on_start(self):
        # Runs once per simulated user, before any task
        self.client.post('/api/login', json={
            'username': 'testuser',
            'password': 'testpass',
        })

    @task(3)  # weight 3: chosen three times as often as the other tasks
    def view_homepage(self):
        self.client.get('/')

    @task(1)
    def view_product(self):
        self.client.get('/api/products/1')

    @task(1)
    def create_order(self):
        self.client.post('/api/orders', json={
            'product_id': 1,
            'quantity': 2,
        })
§05


Common pitfalls

  • Running all workers on a single machine. Locust workers are CPU-bound; distribute them across machines to generate real load without bottlenecking on the test runner.
  • Not using wait_time between tasks. Without delays, Locust hammers the server unrealistically fast, producing results that do not reflect real user behavior.
  • Ignoring the web UI statistics. The real-time dashboard shows percentile response times, failure rates, and throughput that help identify performance regressions during the test.

Frequently Asked Questions

How does Locust compare to k6?

Locust uses Python for test scripts and has a web UI. k6 uses JavaScript with a Go runtime. k6 is more performant per machine (Go vs Python). Locust is more accessible to Python teams and allows using any Python library in tests. k6 integrates better with Grafana for visualization. Choose based on your team's language preference.

Can Locust test non-HTTP services?

Yes. Locust can test any protocol by writing custom clients. The framework provides timing and success/failure event hooks that work with any Python networking library. Community examples exist for gRPC, WebSocket, MQTT, and database load testing.

How does distributed testing work in Locust?

Run one master process and multiple worker processes. Workers connect to the master over the network (Locust uses ZeroMQ). The master coordinates test parameters (user count, spawn rate) and aggregates statistics from all workers. Workers can run on different machines. Start workers with locust --worker --master-host=<master-ip>.
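A distributed run then looks like this (the master IP is a placeholder):

```shell
# On the coordinating machine:
locust -f locustfile.py --master

# On each load-generating machine:
locust -f locustfile.py --worker --master-host=192.0.2.10
```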

Does Locust support headless mode?

Yes. Run Locust without the web UI using --headless flag with --users and --spawn-rate options. Results are printed to the terminal and can be exported to CSV or JSON. Headless mode is essential for CI/CD pipeline integration.
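For example, a headless CI run might look like this (the host, user count, and duration are illustrative values, not defaults):

```shell
locust -f locustfile.py --headless \
       --users 100 --spawn-rate 10 --run-time 2m \
       --csv results --host https://staging.example.com
```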

What is the performance limit of Locust?

A single Locust worker (one CPU core) handles a few hundred to a few thousand requests per second depending on response time and task complexity. Python's GIL is the bottleneck. For higher throughput, add more workers across machines. k6 or Gatling can generate more load per machine due to Go and JVM performance respectively.
