Configs · Apr 2, 2026 · 2 min read

OpenLIT — OpenTelemetry LLM Observability

Monitor LLM costs, latency, and quality with OpenTelemetry-native tracing. GPU monitoring and guardrails built in. 2.3K+ stars.

TokRepo Featured · Community
Quick Use

Use it first, then decide how deep to go

The snippets below cover what to copy, install, and run first.

```bash
pip install openlit
```

```python
import openlit

# One line to instrument all LLM calls
openlit.init()

# Now use any LLM library as usual — OpenLIT traces automatically
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is RAG?"}],
)
# Traces, costs, latency, and token usage are captured automatically
```

Launch the dashboard:

```bash
docker run -p 3000:3000 ghcr.io/openlit/openlit
```

---
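For intuition, one-line auto-instrumentation is usually achieved by wrapping a provider client's methods so every call records a span. A toy sketch of the idea, not OpenLIT's actual implementation (the `traced` decorator and `chat_completion` stand-in are invented for illustration):

```python
import functools
import time

def traced(fn):
    """Wrap a callable so each call appends a span-like record
    (name + latency) to a list, while behavior stays unchanged."""
    spans = []

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        spans.append({
            "name": fn.__name__,
            "latency_ms": (time.perf_counter() - start) * 1000,
        })
        return result

    wrapper.spans = spans
    return wrapper

# Pretend this is a provider SDK method being patched at init time
def chat_completion(prompt: str) -> str:
    return f"echo: {prompt}"

chat_completion = traced(chat_completion)

print(chat_completion("What is RAG?"))  # called as usual, now traced
print(chat_completion.spans[0]["name"])
```

A real instrumentor does this patching for every supported provider when `openlit.init()` runs, and exports the spans over OTLP instead of keeping them in a list.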
Intro

OpenLIT is an OpenTelemetry-native observability platform (2,300+ GitHub stars) purpose-built for LLM applications. With a single `openlit.init()` call, it auto-instruments 50+ LLM providers and frameworks, capturing traces, cost tracking, token usage, latency metrics, and quality evaluations.

It includes GPU monitoring, prompt/response guardrails, and a built-in dashboard — all built on an open standard (OpenTelemetry), so your data isn't locked into a proprietary vendor. Send traces to any OTLP-compatible backend (Grafana, Datadog, Jaeger).

Works with: OpenAI, Anthropic, Google, Ollama, LangChain, LlamaIndex, CrewAI, Haystack, and 50+ more. Best for teams running LLM apps in production who need cost and quality monitoring. Setup time: under 2 minutes.

---
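Cost tracking of this kind generally boils down to multiplying token counts by per-model prices. A minimal sketch of that arithmetic, with made-up prices (the real rate table and API are OpenLIT's own):

```python
# Hypothetical per-1K-token prices in USD — illustration only, not real rates
PRICES = {
    "gpt-4o": {"input": 0.0025, "output": 0.01},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: (tokens / 1000) * per-1K price,
    summed over input and output directions."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

print(round(request_cost("gpt-4o", 1200, 400), 6))  # 0.007
```

Aggregating these per-request numbers by model and by time window gives the dashboard views described below (total spend, cost trends, input vs output breakdown).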
## OpenLIT Features

### One-Line Auto-Instrumentation

```python
import openlit

openlit.init()  # That's it!

# All these are now automatically traced:
# - OpenAI calls
# - Anthropic calls
# - LangChain chains
# - LlamaIndex queries
# - Vector DB operations
# - Embedding generation
```

### Supported Providers (50+)

| Category | Providers |
|----------|-----------|
| **LLMs** | OpenAI, Anthropic, Google, Cohere, Mistral, Ollama, HuggingFace |
| **Frameworks** | LangChain, LlamaIndex, CrewAI, Haystack, AutoGen |
| **Vector DBs** | Chroma, Pinecone, Qdrant, Weaviate, Milvus |
| **Embeddings** | OpenAI, Cohere, HuggingFace, Voyage AI |
| **GPUs** | NVIDIA (via nvidia-smi monitoring) |

### Cost Tracking

```python
# OpenLIT automatically calculates costs per request
# Dashboard shows:
# - Total spend per model
# - Cost per request
# - Token usage breakdown (input vs output)
# - Cost trends over time
# - Budget alerts
```

### Guardrails

```python
import openlit

# Add safety guardrails
openlit.init(
    guardrails=[
        openlit.guardrails.ToxicityGuardrail(threshold=0.8),
        openlit.guardrails.PIIGuardrail(),
        openlit.guardrails.PromptInjectionGuardrail(),
    ]
)
```

### OpenTelemetry Native

Send data to any OTLP-compatible backend:

```python
import openlit

# Send to Grafana Tempo
openlit.init(
    otlp_endpoint="http://grafana:4318",
    application_name="my-ai-app",
    environment="production",
)
# Or Jaeger, Datadog, New Relic, etc.
```

### GPU Monitoring

```python
openlit.init(collect_gpu_stats=True)
# Tracks: GPU utilization, memory usage, temperature
```

---

## FAQ

**Q: What is OpenLIT?**
A: OpenLIT is an OpenTelemetry-native LLM observability platform with 2,300+ GitHub stars. One line of code instruments 50+ providers, tracking costs, latency, tokens, and quality with a built-in dashboard.

**Q: How is OpenLIT different from Langfuse or LangSmith?**
A: OpenLIT is built on OpenTelemetry (an open standard), not a proprietary format, so your data can go to any OTLP backend (Grafana, Datadog, Jaeger). Langfuse and LangSmith use their own data formats and dashboards.

**Q: Is OpenLIT free?**
A: Yes, it is open source under Apache-2.0. Self-host for free.

---
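The `threshold=0.8` parameter in the guardrails example above suggests a score-based gate: a scorer rates the text and the guardrail passes or blocks it against the threshold. A toy illustration of that pattern, not OpenLIT's implementation (the scorer and function names here are invented):

```python
def toxicity_score(text: str) -> float:
    """Stand-in scorer: fraction of words found in a tiny blocklist.
    A real guardrail would use a trained classifier instead."""
    blocklist = {"hate", "stupid"}
    words = text.lower().split()
    return sum(w in blocklist for w in words) / max(len(words), 1)

def toxicity_guardrail(text: str, threshold: float = 0.8) -> bool:
    """Return True if the text passes (score strictly below threshold)."""
    return toxicity_score(text) < threshold

print(toxicity_guardrail("hello there"))           # passes
print(toxicity_guardrail("stupid hate", 0.5))      # blocked
```

Lowering the threshold makes the gate stricter; the same pass/block shape applies to PII and prompt-injection checks, each with its own scorer.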
🙏 Source & Thanks

> Created by [OpenLIT](https://github.com/openlit). Licensed under Apache-2.0.
>
> [openlit](https://github.com/openlit/openlit) — ⭐ 2,300+

