# OpenLIT Features

## One-Line Auto-Instrumentation
```python
import openlit

openlit.init()  # That's it!

# All of these are now automatically traced:
# - OpenAI calls
# - Anthropic calls
# - LangChain chains
# - LlamaIndex queries
# - Vector DB operations
# - Embedding generation
```

## Supported Providers (50+)
| Category | Providers |
|---|---|
| LLMs | OpenAI, Anthropic, Google, Cohere, Mistral, Ollama, HuggingFace |
| Frameworks | LangChain, LlamaIndex, CrewAI, Haystack, AutoGen |
| Vector DBs | Chroma, Pinecone, Qdrant, Weaviate, Milvus |
| Embeddings | OpenAI, Cohere, HuggingFace, Voyage AI |
| GPUs | NVIDIA (via nvidia-smi monitoring) |
## Cost Tracking

OpenLIT automatically calculates costs per request. The dashboard shows:

- Total spend per model
- Cost per request
- Token usage breakdown (input vs. output)
- Cost trends over time
- Budget alerts
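Under the hood, a per-request cost figure is just token usage priced against a per-model rate table. A standalone illustration of that arithmetic, using hypothetical prices rather than OpenLIT's internal pricing data:

```python
# Hypothetical per-million-token rates, purely for illustration;
# OpenLIT maintains real provider price tables internally.
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},  # USD per 1M tokens (assumed)
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Price a single request from its token counts."""
    rate = PRICES[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000

# 1,000 input tokens and 500 output tokens:
cost = request_cost("gpt-4o", 1_000, 500)
print(f"${cost:.4f}")  # $0.0025 input + $0.0050 output = $0.0075
```

Summing these per-request figures over time windows and grouping by model gives the spend and trend views listed above.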
## Guardrails

```python
import openlit

# Add safety guardrails
openlit.init(
    guardrails=[
        openlit.guardrails.ToxicityGuardrail(threshold=0.8),
        openlit.guardrails.PIIGuardrail(),
        openlit.guardrails.PromptInjectionGuardrail(),
    ]
)
```
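The `threshold=0.8` pattern above can be read as: score the text, then block when the score meets the threshold. A toy sketch of that contract, with a stand-in keyword heuristic as the scorer (not OpenLIT's implementation):

```python
# Toy guardrail: flag text whose toxicity score meets the threshold.
# The scorer is a hypothetical keyword heuristic, purely for illustration;
# real guardrails use classifier models.
TOXIC_WORDS = {"idiot", "stupid"}

def toxicity_score(text: str) -> float:
    """Fraction of words that appear in the toxic-word set."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in TOXIC_WORDS for w in words) / len(words)

def blocks(text: str, threshold: float = 0.8) -> bool:
    """Return True when the score crosses the threshold."""
    return toxicity_score(text) >= threshold

print(blocks("you idiot"))                       # score 0.5, below 0.8 -> False
print(blocks("idiot idiot idiot idiot stupid"))  # score 1.0 -> True
```

Raising the threshold makes the guardrail more permissive; lowering it blocks more borderline content.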
## OpenTelemetry Native

Send data to any OTLP-compatible backend:

```python
import openlit

# Send to Grafana Tempo
openlit.init(
    otlp_endpoint="http://grafana:4318",
    application_name="my-ai-app",
    environment="production",
)
```

The same configuration works for Jaeger, Datadog, New Relic, or any other OTLP-compatible backend.
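Per the OTLP/HTTP convention, the endpoint above is a base URL (4318 is the default OTLP/HTTP port); exporters append a per-signal path to it. A small sketch of that convention, not openlit internals:

```python
# OTLP/HTTP exporters derive per-signal URLs from the base endpoint.
base = "http://grafana:4318"
signal_urls = {signal: f"{base}/v1/{signal}" for signal in ("traces", "metrics", "logs")}
print(signal_urls["traces"])  # http://grafana:4318/v1/traces
```

Pointing `otlp_endpoint` at a different backend only changes the base URL; the signal paths stay the same.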
## GPU Monitoring

```python
openlit.init(collect_gpu_stats=True)
# Tracks GPU utilization, memory usage, and temperature
```
## FAQ

**Q: What is OpenLIT?**
A: OpenLIT is an OpenTelemetry-native LLM observability platform with 2,300+ GitHub stars. One line of code instruments 50+ providers, tracking costs, latency, tokens, and quality, with a built-in dashboard.

**Q: How is OpenLIT different from Langfuse or LangSmith?**
A: OpenLIT is built on OpenTelemetry, an open standard, rather than a proprietary format, so your data can go to any OTLP backend (Grafana, Datadog, Jaeger). Langfuse and LangSmith use their own data formats and dashboards.

**Q: Is OpenLIT free?**
A: Yes. It is open source under the Apache-2.0 license and can be self-hosted for free.