What Polars Does
- Eager and lazy evaluation — choose per query
- Query optimization — predicate pushdown, projection pushdown, common subexpression elimination
- Multi-threaded — parallel execution on all cores
- Arrow-native — Apache Arrow columnar format, zero-copy
- Streaming — process larger-than-RAM datasets
- Expressions — composable, type-safe column expressions
- IO — CSV, Parquet, JSON, Arrow IPC, Avro, databases, cloud storage (S3, GCS, Azure)
- SQL interface — `pl.SQLContext` for SQL queries on DataFrames
- Group by — fast aggregation with rich expression API
- Window functions — rolling, expanding, partition-based
Architecture
Rust core with Python bindings via PyO3. Lazy mode builds a logical plan → optimizer → physical plan → parallel execution. Data stored in Apache Arrow chunked arrays for cache-friendly, SIMD-accelerated operations.
Comparison
| Library | Language | Speed | Lazy | Memory |
|---|---|---|---|---|
| Polars | Rust + Python | Fastest | Yes | Arrow |
| pandas | Python (C ext) | Slower (mostly single-threaded) | No | NumPy |
| Spark DataFrame | Scala/Python | Fast (distributed) | Yes | JVM |
| DuckDB | C++ | Very fast | Yes | Columnar |
| Vaex | C++ + Python | Fast | Yes | Memory-mapped |
FAQ
Q: Polars vs pandas? A: Polars is typically 5-100x faster on common benchmarks, largely due to Rust multithreading versus pandas' mostly single-threaded execution. The APIs are not compatible, but Polars' expression API is more consistent and has fewer pitfalls. Polars is recommended for new projects.
Q: How large of a dataset can it handle? A: Lazy + streaming mode can process datasets far larger than memory. TB-scale Parquet files on a single machine are fine.
Q: Compared to DuckDB? A: Polars is a DataFrame library (Python-first API); DuckDB is a SQL database engine. Both are fast and can complement each other.
Sources
- Docs: https://docs.pola.rs
- GitHub: https://github.com/pola-rs/polars
- License: MIT