Configs · Apr 2, 2026 · 2 min read

OpenLIT — OpenTelemetry LLM Observability

Monitor LLM costs, latency, and quality with OpenTelemetry-native tracing. GPU monitoring and guardrails built in. 2.3K+ stars.

Introduction

OpenLIT is an OpenTelemetry-native observability platform with 2,300+ GitHub stars purpose-built for LLM applications. With a single openlit.init() call, it auto-instruments 50+ LLM providers and frameworks, capturing traces, cost tracking, token usage, latency metrics, and quality evaluations. It includes GPU monitoring, prompt/response guardrails, and a built-in dashboard — all built on open standards (OpenTelemetry) so your data isn't locked into a proprietary vendor. Send traces to any OTLP-compatible backend (Grafana, Datadog, Jaeger).

Works with: OpenAI, Anthropic, Google, Ollama, LangChain, LlamaIndex, CrewAI, Haystack, and 50+ more. Best for teams running LLM apps in production who need cost and quality monitoring. Setup time: under 2 minutes.


OpenLIT Features

One-Line Auto-Instrumentation

import openlit
openlit.init()  # That's it!

# All these are now automatically traced:
# - OpenAI calls
# - Anthropic calls
# - LangChain chains
# - LlamaIndex queries
# - Vector DB operations
# - Embedding generation

Supported Providers (50+)

Category Providers
LLMs OpenAI, Anthropic, Google, Cohere, Mistral, Ollama, HuggingFace
Frameworks LangChain, LlamaIndex, CrewAI, Haystack, AutoGen
Vector DBs Chroma, Pinecone, Qdrant, Weaviate, Milvus
Embeddings OpenAI, Cohere, HuggingFace, Voyage AI
GPUs NVIDIA (via nvidia-smi monitoring)

Cost Tracking

# OpenLIT automatically calculates costs per request
# Dashboard shows:
# - Total spend per model
# - Cost per request
# - Token usage breakdown (input vs output)
# - Cost trends over time
# - Budget alerts
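The arithmetic behind these dashboard numbers is easy to reproduce by hand. As a rough illustration (the per-token prices and sample requests below are made up for the example, and this helper is not part of OpenLIT's API), per-request cost is input tokens times the input price plus output tokens times the output price:

```python
# Illustrative only: made-up prices and sample data, not OpenLIT's pricing table.
PRICES_PER_1K = {  # model -> (input, output) USD per 1K tokens
    "gpt-4o": (0.005, 0.015),
}

def request_cost(model, input_tokens, output_tokens):
    """Cost of one request: input and output tokens priced separately."""
    inp, out = PRICES_PER_1K[model]
    return input_tokens / 1000 * inp + output_tokens / 1000 * out

# (model, input_tokens, output_tokens) for two sample requests
requests = [("gpt-4o", 1200, 300), ("gpt-4o", 800, 500)]
total = sum(request_cost(m, i, o) for m, i, o in requests)
```

OpenLIT does this bookkeeping per request automatically and aggregates it into the per-model and trend views listed above.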

Guardrails

import openlit

# Add safety guardrails
openlit.init(
    guardrails=[
        openlit.guardrails.ToxicityGuardrail(threshold=0.8),
        openlit.guardrails.PIIGuardrail(),
        openlit.guardrails.PromptInjectionGuardrail(),
    ]
)
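The `threshold=0.8` knob can be read as "block once the scored risk exceeds 0.8". A minimal sketch of that semantics (the class and scorer below are stand-ins for illustration, not OpenLIT's implementation, which would call a real classifier):

```python
class ToxicityCheck:
    """Stand-in guardrail: passes text whose toxicity score stays at or below a threshold."""

    def __init__(self, threshold=0.8, scorer=None):
        self.threshold = threshold
        # A real guardrail would run a toxicity classifier; we accept any callable.
        self.scorer = scorer or (lambda text: 0.0)

    def check(self, text):
        """Return True if the text passes the guardrail."""
        return self.scorer(text) <= self.threshold

# Toy scorer standing in for a real model
guard = ToxicityCheck(threshold=0.8,
                      scorer=lambda t: 0.95 if "insult" in t else 0.1)
```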

OpenTelemetry Native

Send data to any OTLP-compatible backend:

import openlit

# Send to Grafana Tempo
openlit.init(
    otlp_endpoint="http://grafana:4318",
    application_name="my-ai-app",
    environment="production",
)

# Or Jaeger, Datadog, New Relic, etc.
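Because the output is standard OTLP, the endpoint can also come from the usual OpenTelemetry environment variables rather than code. A sketch of the typical precedence — explicit argument, then `OTEL_EXPORTER_OTLP_ENDPOINT`, then a local default (the helper itself is hypothetical, not part of OpenLIT):

```python
import os

def resolve_otlp_endpoint(explicit=None, default="http://127.0.0.1:4318"):
    """Pick the OTLP endpoint: an explicit argument wins, then the standard
    OTEL_EXPORTER_OTLP_ENDPOINT env var, then a local default."""
    return explicit or os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT") or default

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://grafana:4318"
endpoint = resolve_otlp_endpoint()
```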

GPU Monitoring

openlit.init(collect_gpu_stats=True)
# Tracks: GPU utilization, memory usage, temperature
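The stats come from nvidia-smi, whose CSV query mode is straightforward to parse. A sketch under that assumption (the sample row is fabricated; a real collector would run the command via `subprocess` on a schedule):

```python
# Fabricated sample output of:
#   nvidia-smi --query-gpu=utilization.gpu,memory.used,temperature.gpu \
#              --format=csv,noheader,nounits
SAMPLE = "87, 10240, 71"

def parse_gpu_stats(line):
    """Parse one CSV row into utilization (%), memory used (MiB), temperature (C)."""
    util, mem, temp = (int(field.strip()) for field in line.split(","))
    return {"utilization": util, "memory_used_mib": mem, "temperature_c": temp}

stats = parse_gpu_stats(SAMPLE)
```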

FAQ

Q: What is OpenLIT? A: OpenLIT is an OpenTelemetry-native LLM observability platform with 2,300+ GitHub stars. One line of code instruments 50+ providers, tracking costs, latency, tokens, and quality with a built-in dashboard.

Q: How is OpenLIT different from Langfuse or LangSmith? A: OpenLIT is built on OpenTelemetry (open standard), not a proprietary format. Your data can go to any OTLP backend (Grafana, Datadog, Jaeger). Langfuse and LangSmith use their own data formats and dashboards.

Q: Is OpenLIT free? A: Yes, open-source under Apache-2.0. Self-host for free.



Source and acknowledgements

Created by OpenLIT. Licensed under Apache-2.0.

openlit — ⭐ 2,300+

