Configs · Apr 1, 2026 · 1 min read

Weights & Biases — ML Experiment Tracking

W&B tracks, visualizes, and manages ML experiments and LLM apps. 10.9K+ GitHub stars. Experiment tracking, model versioning, Weave for LLMs. MIT licensed.

TL;DR
Weights and Biases provides experiment tracking, model versioning, and LLM observability for ML teams with a 2-line integration.
§01

What it is

Weights and Biases (W&B) is a platform for tracking, visualizing, and managing machine learning experiments. It captures hyperparameters, metrics, model artifacts, and system resource usage with a lightweight Python integration. The Weave product extends this to LLM application observability.

W&B targets ML engineers and data scientists who need to compare experiments, reproduce results, and collaborate on model development. The Python SDK is MIT licensed.

§02

How it saves time or tokens

Without experiment tracking, ML teams rely on spreadsheets, terminal logs, and naming conventions to keep track of training runs. W&B auto-captures everything: hyperparameters, loss curves, system metrics, code state, and output artifacts. A dashboard lets you compare runs side by side.
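Past runs can also be compared programmatically rather than in the dashboard. A minimal sketch using wandb.Api, the SDK's public query interface (the 'my-entity/my-project' path is a placeholder for your own entity and project):

```python
import wandb

# Query finished runs and compare their configs and summary metrics
api = wandb.Api()
runs = api.runs('my-entity/my-project')

for run in runs:
    print(run.name, run.config.get('learning_rate'), run.summary.get('loss'))
```

This requires being logged in to a W&B instance; run names, configs, and summary metrics come back as plain Python objects, so sorting or filtering runs in a script is straightforward.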

Describing a W&B integration costs an estimated 331 tokens. The 2-line init captures most of that information automatically.

§03

How to use

  1. Install and authenticate:
pip install wandb
wandb login
  2. Add tracking to any training script:
import wandb

# Start a run and record hyperparameters
wandb.init(project='my-project', config={
    'learning_rate': 1e-4,
    'epochs': 10,
    'batch_size': 32
})

# train_one_epoch() is a placeholder for your own training step
for epoch in range(10):
    loss = train_one_epoch()
    wandb.log({'loss': loss, 'epoch': epoch})

# Mark the run finished so metrics flush to the server
wandb.finish()
  3. View results at wandb.ai or your self-hosted instance.
§04

Example

import wandb
from wandb import Artifact

# Track a training run with a model artifact
run = wandb.init(project='nlp', name='bert-v2')

# Training loop; train_step() is a placeholder returning a metrics dict
for step in range(1000):
    metrics = train_step()
    wandb.log(metrics)

# Save model checkpoints as a versioned artifact
artifact = Artifact('model-bert-v2', type='model')
artifact.add_dir('checkpoints/')
run.log_artifact(artifact)
run.finish()

# Weave for LLM tracing: calls to @weave.op functions are traced
import weave
weave.init('my-llm-app')

@weave.op
def generate(prompt: str) -> str:
    # llm is a placeholder for your LLM client
    return llm.complete(prompt)
§05


Common pitfalls

  • The free tier has limits on storage and team size. Large teams or projects with big artifacts may need a paid plan.
  • wandb.init() creates a network connection to the W&B servers. In air-gapped environments, use offline mode: wandb.init(mode='offline').
  • Auto-logging can capture sensitive information (environment variables, code diffs). Review what is logged before sharing runs publicly.
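
The offline workflow from the second pitfall can be sketched as a pair of commands; wandb sync is the CLI command for uploading runs recorded offline, and the wandb/offline-run-* directory name follows W&B's default local layout (train.py is a placeholder for your own script):

```shell
# Record runs locally with no network calls
export WANDB_MODE=offline
python train.py

# Later, from a machine that can reach the W&B server,
# upload the locally stored runs
wandb sync wandb/offline-run-*
```

Setting the WANDB_MODE environment variable is equivalent to passing mode='offline' to wandb.init(), and avoids touching the training code.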

Frequently Asked Questions

Is W&B free?

W&B has a free tier for personal projects with unlimited experiments. Team features and increased storage require paid plans. Academic researchers get free access to team features.

How does W&B compare to MLflow?

Both track experiments and manage models. W&B provides a more polished UI, better visualization tools, and a hosted platform. MLflow is fully open-source and self-hosted. W&B's Weave product extends into LLM observability.

Can I self-host W&B?

Yes. W&B offers a self-hosted option called W&B Server for teams that need to keep data on-premises. It runs as a Docker container and supports the same features as the cloud version.
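
As a sketch of the self-hosted setup (the wandb/local Docker image and the wandb server CLI subcommand are the documented entry points; the port and volume names here are illustrative):

```shell
# Start the W&B Server container directly
docker run -d --name wandb-local -p 8080:8080 \
    -v wandb:/vol wandb/local

# Or let the wandb CLI manage the container
wandb server start

# Point the SDK at your instance
wandb login --host http://localhost:8080
```

Once logged in against the local host, wandb.init() in training scripts needs no changes.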

Does W&B work with PyTorch and TensorFlow?

Yes. W&B has integrations for PyTorch, TensorFlow, Keras, JAX, Hugging Face Transformers, and many other ML frameworks. Most frameworks are auto-detected and logged without extra configuration.
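
For PyTorch specifically, a minimal sketch of the integration looks like this (assuming wandb and torch are installed; wandb.watch is the SDK hook for logging gradients and parameters, and mode='offline' keeps the example runnable without credentials):

```python
import torch
import torch.nn as nn
import wandb

run = wandb.init(project='pytorch-demo', mode='offline')
model = nn.Linear(10, 1)

# Log gradients and parameter histograms every 100 steps
wandb.watch(model, log='all', log_freq=100)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for step in range(100):
    x = torch.randn(32, 10)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    wandb.log({'loss': loss.item()})

run.finish()
```

wandb.watch adds hooks to the model, so gradient histograms appear in the dashboard without any extra logging calls in the loop.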

What is Weave?

Weave is W&B's LLM observability product. It traces LLM application calls, tracks prompts and completions, and provides evaluation tools for LLM outputs. Use it to monitor production LLM applications.


Source & Thanks

wandb/wandb — 10,900+ GitHub stars
