Configs · Mar 31, 2026 · 2 min read

MLflow — Open Source AI Engineering Platform

MLflow is one of the most widely adopted open-source AI engineering platforms, covering tracing, evaluation, prompt management, and model deployment. 25K+ GitHub stars. 60M+ monthly downloads. Apache 2.0.

TL;DR
MLflow provides tracing, evaluation, prompt management, and model deployment for AI engineering teams.
§01

What it is

MLflow is an open-source platform for managing the full AI and machine learning lifecycle. It covers experiment tracking, model registry, prompt engineering, evaluation, tracing, and deployment. With 25K+ GitHub stars and 60M+ monthly downloads, it is one of the most widely adopted ML operations tools.

MLflow serves data scientists, ML engineers, and AI application developers who need to track experiments, compare model performance, manage prompts, and deploy models to production. It works with any ML library and supports both traditional ML and LLM-based applications.

§02

How it saves time or tokens

MLflow's tracing capability records every LLM call, including prompts, completions, token counts, and latency. This eliminates manual logging and makes it straightforward to identify expensive prompts, compare model outputs, and optimize token usage across an application. The evaluation framework runs automated quality checks against model outputs, catching regressions before deployment rather than in production.
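As an illustration of what that recorded data enables, here is a pure-Python sketch. The trace records and field names below are hypothetical, but they mirror the per-call prompt, token, and latency data that tracing captures:

```python
# Hypothetical trace records shaped like the per-call data MLflow
# tracing captures (prompt name, token counts, latency).
traces = [
    {"prompt": "summarize_doc", "input_tokens": 3200, "output_tokens": 400, "latency_ms": 2100},
    {"prompt": "classify_intent", "input_tokens": 150, "output_tokens": 10, "latency_ms": 180},
    {"prompt": "rewrite_email", "input_tokens": 900, "output_tokens": 600, "latency_ms": 950},
]

def total_tokens(trace):
    # Cost scales with input + output tokens for most providers.
    return trace["input_tokens"] + trace["output_tokens"]

# With every call logged, finding the most expensive prompt is one query away.
most_expensive = max(traces, key=total_tokens)
print(most_expensive["prompt"])  # summarize_doc
```

In practice you would run the same aggregation over the traces MLflow stores, rather than hand-built dictionaries.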

§03

How to use

  1. Install MLflow:
pip install mlflow
  2. Start the MLflow tracking server:
mlflow server --host 0.0.0.0 --port 5000
  3. Log experiments in your training or inference code:
import mlflow

mlflow.set_tracking_uri('http://localhost:5000')
mlflow.set_experiment('my-llm-app')

with mlflow.start_run():
    mlflow.log_param('model', 'claude-sonnet')
    mlflow.log_param('temperature', 0.7)
    mlflow.log_metric('latency_ms', 450)
    mlflow.log_metric('token_count', 1200)
§04

Example

import mlflow
from mlflow.metrics.genai import answer_relevance

# Evaluate LLM outputs automatically, using GPT-4 as the judge model
results = mlflow.evaluate(
    data=eval_dataset,
    model=my_llm_pipeline,
    model_type='question-answering',
    extra_metrics=[answer_relevance(model='openai:/gpt-4')],
)

print(results.tables['eval_results_table'])

# Enable auto-tracing for LangChain
mlflow.langchain.autolog()
§05

Common pitfalls

  • MLflow's tracking server stores artifacts locally by default. For production, configure an external artifact store (S3, GCS, Azure Blob) to avoid filling up disk space.
  • Auto-tracing instruments every LLM call. In high-throughput applications, this generates significant storage. Use sampling or filter by experiment to control volume.
  • The model registry and deployment features require additional setup (Docker, Kubernetes, or a cloud provider). The quickstart only covers tracking.
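One way to apply the sampling mentioned above is a probabilistic gate in your own request path. This is a sketch, not an MLflow API; `should_trace` and the rate are hypothetical:

```python
import random

def should_trace(sample_rate: float = 0.1) -> bool:
    # Hypothetical gate: trace roughly sample_rate of requests so a
    # high-throughput app doesn't flood trace storage.
    return random.random() < sample_rate

random.seed(0)  # seeded only to make the sketch reproducible
sampled = sum(should_trace(0.1) for _ in range(10_000))
print(sampled)  # roughly 1,000 of 10,000 requests traced
```

Wrap your traced code path in this check so untraced requests skip instrumentation entirely.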

Frequently Asked Questions

Is MLflow free and open source?

Yes. MLflow is released under the Apache 2.0 license. The core platform including tracking, registry, evaluation, and deployment is fully open source. Databricks offers a managed MLflow service with additional enterprise features.

Does MLflow support LLM tracing?

Yes. MLflow provides native tracing for LLM applications. It captures prompts, completions, token usage, latency, and tool calls. Auto-tracing integrations exist for LangChain, OpenAI, and other popular LLM frameworks.

How does MLflow compare to Weights & Biases?

Both track experiments and metrics. MLflow is fully open source and self-hostable. W&B offers a more polished UI and collaboration features but is a commercial product. MLflow has stronger model registry and deployment capabilities; W&B excels at experiment visualization.

Can MLflow evaluate LLM outputs automatically?

Yes. MLflow includes an evaluation framework with built-in metrics like answer relevance, faithfulness, and toxicity. You can use LLM-as-judge evaluators (GPT-4, Claude) to score outputs automatically against your criteria.
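The LLM-as-judge pattern can be illustrated with the judge stubbed out. The keyword-overlap heuristic below is a hypothetical stand-in for a real GPT-4 or Claude scoring call, not MLflow's implementation:

```python
def judge_relevance(question: str, answer: str) -> float:
    # A real judge would prompt an LLM to score relevance on a 0-1 scale;
    # a keyword-overlap heuristic stands in here for illustration.
    q_words = set(question.lower().split())
    a_words = set(answer.lower().split())
    return len(q_words & a_words) / max(len(q_words), 1)

score = judge_relevance(
    "What license is MLflow under?",
    "MLflow is released under the Apache 2.0 license",
)
print(round(score, 2))  # 0.6
```

MLflow's built-in genai metrics follow the same shape: given a question and an answer, return a numeric score you can aggregate and threshold.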

What languages and frameworks does MLflow support?+

MLflow has official SDKs for Python, R, and Java. It integrates with PyTorch, TensorFlow, scikit-learn, XGBoost, LangChain, OpenAI, and dozens of other ML and AI frameworks. The REST API allows integration from any language.
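For example, creating a run from outside the official SDKs only requires an HTTP POST to the REST endpoint. This sketch builds the request without sending it; the host and port are assumptions carried over from the quickstart server above:

```python
import json
from urllib.request import Request

# MLflow's REST API creates a run via POST /api/2.0/mlflow/runs/create.
# Host and port assume the local quickstart server; adjust for your setup.
payload = json.dumps({"experiment_id": "0"}).encode()
req = Request(
    "http://localhost:5000/api/2.0/mlflow/runs/create",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# Request infers POST because a body is attached; urlopen(req) would send it.
print(req.get_method(), req.full_url)
```

The same endpoint is reachable from curl, Go, Rust, or any other HTTP client.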


Source & Thanks

Created by Databricks. Licensed under Apache 2.0. mlflow/mlflow — 25,000+ GitHub stars
