Scripts · Apr 13, 2026 · 3 min read

Rayon — Data Parallelism Library for Rust

Rayon makes it trivial to convert sequential Rust code into parallel code. Replace .iter() with .par_iter() and Rayon automatically distributes work across all CPU cores — with zero data races guaranteed by the Rust type system.

TL;DR
Rayon converts sequential Rust iterators into parallel ones with a single method change, guaranteed safe by the type system.

What it is

Rayon is the standard data parallelism library for Rust. It provides parallel iterators that mirror the standard Rust iterator API but distribute work across all available CPU cores automatically. The key advantage is that Rust's type system guarantees no data races at compile time, making parallelism safe by construction.

Rayon targets Rust developers who need to speed up CPU-bound computations without manual thread management. It is used by Polars, Servo, and many other performance-critical Rust projects.

How it saves time or tokens

Rayon's API requires minimal code changes: replace .iter() with .par_iter() or .into_iter() with .into_par_iter(). The library handles thread pool creation, work-stealing scheduling, and load balancing internally. You get multi-core performance without writing thread synchronization code or worrying about data races.
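
A minimal before-and-after sketch (the data and function name are illustrative):

use rayon::prelude::*;

// Same shape as the sequential version; only .iter() becomes .par_iter().
fn sum_of_squares(input: &[i64]) -> i64 {
    input.par_iter().map(|&x| x * x).sum()
}

fn main() {
    let data: Vec<i64> = (1..=10).collect();
    println!("{}", sum_of_squares(&data)); // 385
}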

How to use

  1. Add Rayon to your project: cargo add rayon.
  2. Import the prelude: use rayon::prelude::*;
  3. Replace sequential iterator calls with their parallel equivalents (.par_iter(), .par_sort(), .par_chunks()); the chunked pattern is sketched below.
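
A short sketch of the chunked pattern from step 3 (the chunk size is illustrative):

use rayon::prelude::*;

fn main() {
    let data: Vec<u64> = (0..10_000).collect();

    // Sum fixed-size chunks in parallel, then combine the partial sums.
    let total: u64 = data
        .par_chunks(1_024)
        .map(|chunk| chunk.iter().sum::<u64>())
        .sum();

    assert_eq!(total, (0..10_000).sum());
}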

Example

use rayon::prelude::*;

fn main() {
    // Sequential: single-core
    let sum: i64 = (0..1_000_000).map(|x| x * x).sum();

    // Parallel: all cores, one-line change
    let par_sum: i64 = (0..1_000_000)
        .into_par_iter()
        .map(|x| x * x)
        .sum();

    // Parallel sort
    let mut data = vec![5, 2, 8, 1, 9, 3];
    data.par_sort();

    // Parallel string processing
    let results: Vec<String> = vec!["hello", "world", "rust"]
        .par_iter()
        .map(|s| s.to_uppercase())
        .collect();
}

Common pitfalls

  • Not all workloads benefit from parallelism; very short iterations may run slower due to thread-pool overhead. Benchmark before and after.
  • Rayon's global thread pool defaults to one thread per CPU core; customize it with rayon::ThreadPoolBuilder if you need to limit concurrency (see the sketch after this list).
  • Parallel iterators require the data to be Send (and Sync when shared by reference); closures that capture non-thread-safe types such as Rc will not compile.
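
A minimal sketch of capping the global pool (four threads is an arbitrary choice; build_global may only be called once, before any parallel work runs):

use rayon::prelude::*;

fn main() {
    // Cap the global pool; must run before any parallel work in the process.
    rayon::ThreadPoolBuilder::new()
        .num_threads(4)
        .build_global()
        .unwrap();

    let total: u64 = (0..1_000u64).into_par_iter().sum();
    println!("{total}");
}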

Frequently Asked Questions

How does Rayon prevent data races?

Rayon leverages Rust's ownership and borrowing system. The ParallelIterator trait requires that closures and data satisfy Send and Sync bounds, which the Rust compiler checks at compile time. If your code compiles, it is free of data races.
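
A minimal sketch of the compile-time guarantee in action, using a shared atomic counter (capturing a plain &mut u64 in the closure instead would be rejected by the compiler):

use rayon::prelude::*;
use std::sync::atomic::{AtomicU64, Ordering};

fn main() {
    // AtomicU64 is Sync, so the closure may share it across threads.
    let evens = AtomicU64::new(0);
    (0..1_000u64).into_par_iter().for_each(|x| {
        if x % 2 == 0 {
            evens.fetch_add(1, Ordering::Relaxed);
        }
    });
    assert_eq!(evens.load(Ordering::Relaxed), 500);
}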

What is work-stealing scheduling?

Rayon uses a work-stealing thread pool where idle threads take tasks from busy threads' queues. This provides automatic load balancing without manual task distribution, even when iteration elements have uneven processing times.
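
The effect is easiest to see with rayon::join, the primitive beneath the parallel iterators; a divide-and-conquer sketch (the cutoff is illustrative):

// Idle threads steal the pending halves from busy threads' queues.
fn par_sum(data: &[u64]) -> u64 {
    if data.len() <= 1_024 {
        return data.iter().sum();
    }
    let (left, right) = data.split_at(data.len() / 2);
    let (a, b) = rayon::join(|| par_sum(left), || par_sum(right));
    a + b
}

fn main() {
    let data: Vec<u64> = (0..100_000).collect();
    assert_eq!(par_sum(&data), data.iter().sum());
}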

Can I use Rayon with custom thread pools?

Yes. Use rayon::ThreadPoolBuilder to create custom thread pools with specific thread counts. You can also use pool.install() to run parallel operations on a specific pool instead of the global one.
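
A minimal sketch (two threads is an arbitrary choice):

use rayon::prelude::*;

fn main() {
    // A private pool, separate from the global one.
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(2)
        .build()
        .unwrap();

    // Parallel iterators called inside install() run on this pool.
    let sum: u64 = pool.install(|| (0..1_000u64).into_par_iter().sum());
    println!("{sum}");
}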

Does Rayon work with async Rust?

Rayon is designed for CPU-bound parallelism, not async I/O. You can use Rayon within async code by spawning blocking tasks via tokio::task::spawn_blocking, but do not mix Rayon iterators with async/await directly.
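
A hedged sketch, assuming a tokio dependency with the rt-multi-thread and macros features enabled:

use rayon::prelude::*;

#[tokio::main]
async fn main() {
    // Hand the CPU-bound Rayon work to tokio's blocking pool so it
    // does not stall the async executor.
    let sum: u64 = tokio::task::spawn_blocking(|| {
        (0..1_000_000u64).into_par_iter().map(|x| x * x).sum()
    })
    .await
    .expect("blocking task panicked");

    println!("{sum}");
}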

How much speedup can I expect from Rayon?

Speedup depends on the workload. CPU-bound tasks with independent elements typically see near-linear scaling up to the number of cores. I/O-bound or synchronization-heavy workloads see diminishing returns.
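
A rough way to measure it on your own workload (std::time::Instant only; a proper benchmark would use a harness such as criterion):

use rayon::prelude::*;
use std::time::Instant;

fn main() {
    let data: Vec<u64> = (0..5_000_000).collect();

    let t = Instant::now();
    let seq: u64 = data.iter().map(|&x| x * x % 1_000_003).sum();
    println!("sequential: {:?}", t.elapsed());

    let t = Instant::now();
    let par: u64 = data.par_iter().map(|&x| x * x % 1_000_003).sum();
    println!("parallel:   {:?}", t.elapsed());

    assert_eq!(seq, par);
}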

