# Aim — Open-Source ML Experiment Tracker with Rich Visualizations

> Aim is a self-hosted experiment tracking tool for machine learning that provides a high-performance UI for comparing runs, visualizing metrics, and exploring hyperparameters across thousands of experiments.

## Quick Use

```bash
pip install aim
aim init
```

```python
from aim import Run

run = Run()
run["hparams"] = {"lr": 0.001, "batch_size": 32}

for step in range(100):
    loss_value = 1.0 / (step + 1)  # placeholder for your training loss
    run.track(loss_value, name="loss", step=step)

# Launch the UI:
# aim up
```

## Introduction

Aim is an open-source experiment tracking platform for ML practitioners who need to compare hundreds or thousands of training runs. It stores metrics, hyperparameters, and artifacts locally and serves a rich web UI for interactive exploration, filtering, and comparison.

## What Aim Does

- Tracks metrics, hyperparameters, images, audio, and text across training runs
- Serves a performant web UI for comparing runs with interactive charts
- Stores all data locally in a high-performance embedded database
- Integrates with PyTorch, TensorFlow, Keras, Hugging Face, and other frameworks
- Supports querying runs programmatically with a Python SDK

## Architecture Overview

Aim stores experiment data in a custom embedded database optimized for time-series metrics. The tracking SDK writes metrics and metadata during training with minimal overhead. The web UI reads from this database and renders interactive visualizations using a React frontend. Queries use Aim's query language (AimQL) to filter runs by metric values, hyperparameters, or metadata.
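AimQL expressions are ordinary Python-style boolean expressions over run attributes, usable both in the UI search bar and with SDK methods such as `Repo.query_metrics`. As a minimal sketch, the hypothetical helper below (not part of Aim's API) shows what such filter strings look like:

```python
# Illustrative only: build AimQL-style filter strings.
# build_query is a hypothetical helper, not part of Aim's SDK;
# the resulting string could be passed to Repo.query_metrics()
# or typed into the UI search bar.
def build_query(metric_name: str, **hparam_upper_bounds: float) -> str:
    clauses = [f'metric.name == "{metric_name}"']
    for name, upper in hparam_upper_bounds.items():
        # Hyperparameters tracked under run["hparams"] are addressable
        # as run.hparams.<name> in AimQL.
        clauses.append(f"run.hparams.{name} < {upper}")
    return " and ".join(clauses)

query = build_query("loss", lr=0.01)
# query == 'metric.name == "loss" and run.hparams.lr < 0.01'
```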
## Self-Hosting & Configuration

- Install from PyPI and initialize a repository with `aim init` in your project directory
- Track experiments by creating a `Run` object and calling `run.track()` for each metric
- Launch the dashboard with `aim up` to browse experiments at `localhost:43800`
- Configure remote tracking by running the Aim server and pointing clients to its address
- Set the storage location via the `--repo` CLI flag or the `repo` argument to `Run`

## Key Features

- Interactive UI handles thousands of runs without lag thanks to the custom storage engine
- Side-by-side metric comparison with grouping, smoothing, and aggregation controls
- Built-in explorers for images, audio, text, and distribution visualizations
- Framework integrations (PyTorch Lightning, Hugging Face, Keras) require minimal code changes
- Fully self-hosted with no external dependencies or cloud accounts required

## Comparison with Similar Tools

- **Weights & Biases** — cloud-hosted with more features; Aim is fully self-hosted and free
- **MLflow** — broader MLOps scope; Aim's UI is more interactive for metric comparison
- **TensorBoard** — lightweight but limited querying; Aim scales better with many runs
- **ClearML** — full MLOps platform; heavier to set up for pure experiment tracking

## FAQ

**Q: How does Aim compare to Weights & Biases?**
A: Aim provides similar experiment tracking and visualization but runs entirely self-hosted with no cloud dependency or usage limits.

**Q: Does Aim slow down training?**
A: Tracking overhead is minimal. Aim writes metrics asynchronously and uses an optimized storage format designed for high-frequency writes.

**Q: Can I use Aim with Hugging Face Trainer?**
A: Yes. Import `AimCallback` from `aim.hugging_face` and pass it to the Trainer `callbacks` list.

**Q: How do I share experiments with my team?**
A: Run `aim up --host 0.0.0.0` to expose the dashboard on your network, or deploy the Aim server for centralized tracking.
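The two sharing options from the FAQ can be sketched as shell commands. The dashboard command is taken from the source; the `aim server` invocation and the `aim://` repo address are assumptions based on Aim's remote-tracking setup, so check the docs for your version:

```shell
# Option 1: expose the local dashboard on your network
# (port 43800 is the default).
aim up --host 0.0.0.0 --port 43800

# Option 2 (assumed setup): run a central tracking server...
aim server --host 0.0.0.0
# ...and point client Runs at it from training code:
#   Run(repo="aim://<server-address>:53800")
```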
## Sources

- https://github.com/aimhubio/aim
- https://aimstack.readthedocs.io/