What is Langtrace?
Langtrace is an open-source observability platform for LLM applications. It auto-instruments calls to OpenAI, Anthropic, LangChain, LlamaIndex, and 20+ providers using OpenTelemetry — giving you traces, latency metrics, token usage, and cost tracking in a real-time dashboard.
Best for: Teams running LLM apps in production who need observability. Works with: OpenAI, Anthropic, LangChain, LlamaIndex, Cohere, Pinecone. Setup time: Under 2 minutes.
Core Features
1. Auto-Instrumentation
One line to trace all LLM calls:
```python
from langtrace_python_sdk import langtrace

langtrace.init()  # That's it: all supported calls are auto-traced
```

Supported providers: OpenAI, Anthropic, Google, Cohere, Mistral, Groq, LangChain, LlamaIndex, Pinecone, ChromaDB, Weaviate.
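Conceptually, auto-instrumentation wraps each provider call in a timed span and queues it for export. A minimal stdlib sketch of that idea (hypothetical names, not the SDK's internals):

```python
import functools
import time

def traced(span_name):
    """Hypothetical sketch of the auto-instrumentation idea: wrap a
    provider call, time it, and queue a span for export."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            wrapper.spans.append({
                "name": span_name,
                "latency_ms": (time.perf_counter() - start) * 1000,
            })
            return result
        wrapper.spans = []  # stand-in for the exporter queue
        return wrapper
    return decorator

@traced("llm.completion")
def fake_completion(prompt):
    # Stand-in for a real provider call
    return f"echo: {prompt}"

fake_completion("hello")
```

The real SDK does this by patching the provider clients at import time, which is why a single `init()` call is enough.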
2. OpenTelemetry Native
Export traces to any OTel-compatible backend:
```python
from langtrace_python_sdk import langtrace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

langtrace.init(
    api_key="langtrace-key",
    # Or export to your own collector
    custom_remote_exporter=OTLPSpanExporter(endpoint="http://localhost:4317"),
)
```

3. Real-Time Dashboard
Self-hosted or cloud dashboard showing:
- Request traces with full input/output
- Latency percentiles (p50, p95, p99)
- Token usage per model
- Cost breakdown per endpoint
- Error rates and patterns
4. Evaluation & Testing
```python
from langtrace_python_sdk import with_langtrace_root_span

@with_langtrace_root_span("test-summarization")
def test_summary_quality():
    result = summarize("long text...")
    # Trace includes test metadata
    return result
```

5. Prompt Management
Track prompt versions alongside traces:
```python
from langtrace_python_sdk import inject_additional_attributes

with inject_additional_attributes({"prompt_version": "v2.3", "experiment": "temp-0.7"}):
    response = client.messages.create(...)
```

Self-Hosting
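This kind of scoped attribute injection can be modeled with a context variable and a context manager. A stdlib sketch of the pattern (illustrative only, not the SDK's implementation):

```python
import contextvars
from contextlib import contextmanager

_extra_attrs = contextvars.ContextVar("extra_attrs", default={})

@contextmanager
def inject_attrs(attrs):
    """Merge attributes into the current context for the duration
    of the block, then restore the previous state."""
    token = _extra_attrs.set({**_extra_attrs.get(), **attrs})
    try:
        yield
    finally:
        _extra_attrs.reset(token)

def record_span(name):
    # Any span recorded inside the with-block picks up the scoped attributes
    return {"name": name, **_extra_attrs.get()}

with inject_attrs({"prompt_version": "v2.3", "experiment": "temp-0.7"}):
    inside = record_span("llm.completion")

outside = record_span("llm.completion")
```

Because the attributes are scoped to the block, traces recorded outside it are unaffected, which is what makes side-by-side prompt experiments comparable.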
```bash
git clone https://github.com/Scale3-Labs/langtrace
cd langtrace
docker compose up -d
# Dashboard at http://localhost:3000
```

FAQ
Q: How does it compare to Langfuse? A: Both are open-source LLM observability tools. Langtrace emits standard OpenTelemetry traces, while Langfuse uses its own tracing format; Langtrace also auto-instruments a broader set of providers.
Q: Does it add latency? A: Traces are sent asynchronously. Overhead is < 1ms per call.
Q: Can I use it without the cloud? A: Yes, fully self-hostable with Docker Compose.
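When self-hosting, the SDK needs to be pointed at your own instance instead of Langtrace Cloud. A configuration sketch; the `api_host` parameter and trace path shown here are my understanding of the SDK and should be verified against the current docs:

```python
from langtrace_python_sdk import langtrace

# Point the SDK at the self-hosted dashboard instead of Langtrace Cloud.
# `api_host` and its path are assumed -- check the SDK docs for the exact form.
langtrace.init(
    api_key="your-self-hosted-key",
    api_host="http://localhost:3000/api/trace",
)
```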