# OpenLIT — OpenTelemetry LLM Observability

> Monitor LLM costs, latency, and quality with OpenTelemetry-native tracing. GPU monitoring and guardrails built in. 2,300+ GitHub stars.

## Install

```bash
pip install openlit
```

## Quick Use

```python
import openlit

# One line to instrument all LLM calls
openlit.init()

# Now use any LLM library as usual — OpenLIT traces automatically
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is RAG?"}],
)
# Traces, costs, latency, and token usage are captured automatically
```

Launch the dashboard:

```bash
docker run -p 3000:3000 ghcr.io/openlit/openlit
```

---

## Intro

OpenLIT is an OpenTelemetry-native observability platform (2,300+ GitHub stars) purpose-built for LLM applications. A single `openlit.init()` call auto-instruments 50+ LLM providers and frameworks, capturing traces, cost tracking, token usage, latency metrics, and quality evaluations. It includes GPU monitoring, prompt/response guardrails, and a built-in dashboard — all built on open standards (OpenTelemetry), so your data isn't locked into a proprietary vendor. Send traces to any OTLP-compatible backend (Grafana, Datadog, Jaeger).

Works with: OpenAI, Anthropic, Google, Ollama, LangChain, LlamaIndex, CrewAI, Haystack, and 50+ more.

Best for teams running LLM apps in production who need cost and quality monitoring. Setup time: under 2 minutes.

---

## OpenLIT Features

### One-Line Auto-Instrumentation

```python
import openlit

openlit.init()  # That's it!

# All these are now automatically traced:
# - OpenAI calls
# - Anthropic calls
# - LangChain chains
# - LlamaIndex queries
# - Vector DB operations
# - Embedding generation
```

### Supported Providers (50+)

| Category | Providers |
|----------|-----------|
| **LLMs** | OpenAI, Anthropic, Google, Cohere, Mistral, Ollama, HuggingFace |
| **Frameworks** | LangChain, LlamaIndex, CrewAI, Haystack, AutoGen |
| **Vector DBs** | Chroma, Pinecone, Qdrant, Weaviate, Milvus |
| **Embeddings** | OpenAI, Cohere, HuggingFace, Voyage AI |
| **GPUs** | NVIDIA (via nvidia-smi monitoring) |

### Cost Tracking

```python
# OpenLIT automatically calculates costs per request.
# The dashboard shows:
# - Total spend per model
# - Cost per request
# - Token usage breakdown (input vs. output)
# - Cost trends over time
# - Budget alerts
```

### Guardrails

```python
import openlit

# Add safety guardrails
openlit.init(
    guardrails=[
        openlit.guardrails.ToxicityGuardrail(threshold=0.8),
        openlit.guardrails.PIIGuardrail(),
        openlit.guardrails.PromptInjectionGuardrail(),
    ]
)
```

### OpenTelemetry Native

Send data to any OTLP-compatible backend:

```python
import openlit

# Send to Grafana Tempo
openlit.init(
    otlp_endpoint="http://grafana:4318",
    application_name="my-ai-app",
    environment="production",
)
# Or Jaeger, Datadog, New Relic, etc.
```

### GPU Monitoring

```python
openlit.init(collect_gpu_stats=True)
# Tracks: GPU utilization, memory usage, temperature
```

---

## FAQ

**Q: What is OpenLIT?**

A: OpenLIT is an OpenTelemetry-native LLM observability platform with 2,300+ GitHub stars. One line of code instruments 50+ providers, tracking costs, latency, tokens, and quality with a built-in dashboard.

**Q: How is OpenLIT different from Langfuse or LangSmith?**

A: OpenLIT is built on OpenTelemetry (an open standard), not a proprietary format. Your data can go to any OTLP backend (Grafana, Datadog, Jaeger). Langfuse and LangSmith use their own data formats and dashboards.
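The portability claim above can be made concrete. Assuming OpenLIT's exporter honors the standard OpenTelemetry environment variables (an assumption; the documented route is the `otlp_endpoint` argument to `openlit.init()` shown in the Features section), switching backends is just an endpoint change with no code edits:

```shell
# Standard OpenTelemetry exporter settings (assumption: OpenLIT reads
# these, as most OTel-based SDKs do). Point them at any OTLP collector:
# Grafana Tempo, Jaeger, a Datadog agent, etc.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://grafana:4318"
export OTEL_SERVICE_NAME="my-ai-app"
```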
**Q: Is OpenLIT free?**

A: Yes, it is open source under the Apache-2.0 license. Self-host for free.

---

## Source & Thanks

> Created by [OpenLIT](https://github.com/openlit). Licensed under Apache-2.0.
>
> [openlit](https://github.com/openlit/openlit) — ⭐ 2,300+

---

Source: https://tokrepo.com/en/workflows/13e3c714-032f-4323-b9ee-69f38e613f45
Author: TokRepo精选
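As a closing aside, the per-request cost tracking described in the Features section reduces to per-token price arithmetic. A minimal sketch of that bookkeeping follows; the price table, figures, and `request_cost` function are hypothetical illustrations, not OpenLIT's internal API:

```python
# Illustrative cost arithmetic, not OpenLIT's implementation.
# Prices are hypothetical USD per 1M tokens.
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},  # hypothetical figures
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request, from token counts and a per-model price table."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 1,000 input + 500 output tokens at the prices above
cost = request_cost("gpt-4o", input_tokens=1_000, output_tokens=500)
print(f"${cost:.6f}")  # → $0.007500
```

Summing these per-request figures per model over time gives the "total spend" and "cost trends" views a dashboard displays.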