Configs · Apr 14, 2026 · 3 min read

Candle — Minimalist Machine Learning Framework for Rust

Candle is a Rust-native ML framework focused on inference performance, small binaries, and serverless deployment. It runs Llama, Whisper, Stable Diffusion, and other PyTorch models in pure Rust — no Python required.

TL;DR
Candle runs ML inference in pure Rust with small binaries and no Python required, supporting models such as Llama and Whisper.
§01

What it is

Candle is a Rust-native machine learning framework built by Hugging Face, focused on inference performance, small binaries, and serverless deployment. It runs Llama, Whisper, Stable Diffusion, and other models originally trained in PyTorch, in pure Rust and without requiring Python.

Candle is designed for ML engineers and systems developers who need fast inference with minimal dependencies, especially for edge deployment, WebAssembly targets, or serverless functions where Python runtimes add overhead.

§02

How it saves time or tokens

Candle produces small, self-contained binaries that start in milliseconds, whereas Python-based inference servers must first load heavy runtimes. No Python dependency chain means no version conflicts, no pip install issues, and predictable builds. CUDA and Metal support provides GPU acceleration on par with PyTorch for supported models.
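GPU support is opt-in at build time through Cargo features. A sketch of the dependency entry (version number illustrative; the cuda and metal feature names follow the Candle README):

[dependencies]
candle-core = { version = "0.9", features = ["cuda"] }    # NVIDIA GPUs
# candle-core = { version = "0.9", features = ["metal"] } # Apple Silicon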

§03

How to use

  1. Add Candle to your Rust project (anyhow is used for error handling in the snippets below):
cargo add candle-core candle-nn candle-transformers anyhow
  2. Run a pre-built example (from a clone of the candle repository, which ships the examples):
cargo run --example llama -- --prompt 'Hello, world'
  3. Use the tensor API:
use candle_core::{Device, Tensor};

fn main() -> anyhow::Result<()> {
    let device = Device::Cpu;
    // Two 3x3 tensors drawn from a normal distribution (mean 0, std 1).
    let a = Tensor::randn(0f32, 1., (3, 3), &device)?;
    let b = Tensor::randn(0f32, 1., (3, 3), &device)?;
    // Matrix multiplication; tensor ops return Result, so errors propagate with `?`.
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
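The same code can target a GPU by choosing the device at runtime. A small sketch, assuming the cuda feature is enabled at build time:

use candle_core::Device;

fn pick_device() -> anyhow::Result<Device> {
    // Uses GPU 0 when a CUDA device is available and the `cuda` feature is
    // compiled in; otherwise falls back to the CPU.
    Ok(Device::cuda_if_available(0)?)
}
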
§04

Example

// Load Whisper for speech-to-text. Note: candle-transformers exposes
// Whisper's building blocks rather than a one-call transcribe API; this
// sketch covers model loading, and the rest of the pipeline (mel
// spectrogram, autoregressive token decoding) follows Candle's whisper example.
use candle_core::{DType, Device};
use candle_nn::VarBuilder;
use candle_transformers::models::whisper::{model::Whisper, Config};

fn load_whisper(weights: &std::path::Path, config: Config) -> anyhow::Result<Whisper> {
    let device = Device::Cpu;
    // Memory-map the safetensors weights; unsafe because the file must not
    // be mutated while mapped.
    let vb = unsafe {
        VarBuilder::from_mmaped_safetensors(&[weights], DType::F32, &device)?
    };
    Ok(Whisper::load(&vb, config)?)
}
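
In practice, the weights come from the Hugging Face Hub. A minimal sketch using the hf-hub crate's sync API, as in Candle's examples (the file name model.safetensors is assumed):

fn fetch_whisper_weights() -> anyhow::Result<std::path::PathBuf> {
    // Downloads into the shared Hugging Face cache and reuses it on later runs.
    let api = hf_hub::api::sync::Api::new()?;
    let repo = api.model("openai/whisper-base".to_string());
    Ok(repo.get("model.safetensors")?)
}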
§05

Common pitfalls

  • Expecting full training support: Candle is optimized for inference, not large-scale training
  • Not enabling the cuda or metal feature flags for GPU acceleration
  • Trying to load PyTorch checkpoints directly instead of converting them to the safetensors format first (see the sketch below)
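
Once weights are in safetensors format they load directly. A minimal sketch using candle_core's safetensors helper (the path argument is hypothetical):

use candle_core::{safetensors, Device};

fn inspect_weights(path: &str) -> anyhow::Result<()> {
    let device = Device::Cpu;
    // Returns a HashMap<String, Tensor> keyed by parameter name.
    let tensors = safetensors::load(path, &device)?;
    for (name, tensor) in &tensors {
        println!("{name}: {:?}", tensor.shape());
    }
    Ok(())
}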

Frequently Asked Questions

How does Candle compare to PyTorch?

Candle focuses on inference with small binaries and fast startup. PyTorch is a full training and inference framework with a massive ecosystem. Use Candle for production inference in Rust; use PyTorch for research and training.

Does Candle support GPU acceleration?

Yes. Candle supports CUDA (NVIDIA) and Metal (Apple Silicon) through feature flags. Enable them in your Cargo.toml to use GPU acceleration for tensor operations and model inference.

Which models can Candle run?

Candle supports Llama, Mistral, Whisper, BERT, T5, and other transformer architectures, as well as diffusion models such as Stable Diffusion. The candle-transformers crate provides pre-built model implementations.

Can Candle compile to WebAssembly?

Yes. Candle's pure Rust implementation allows compilation to WASM for browser-based inference. This enables running ML models directly in the browser without a server.
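The build itself uses the standard Rust WASM target; for example (JS bindings and packaging, e.g. via wasm-bindgen or trunk, omitted, and assuming the crate's dependencies are wasm-compatible):

rustup target add wasm32-unknown-unknown
cargo build --release --target wasm32-unknown-unknown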

Who maintains Candle?

Candle is maintained by Hugging Face as part of their Rust ML ecosystem. It integrates with the Hugging Face Hub for model downloads and uses the safetensors format for model weights.
