Apr 7, 2026 · 2 min read

Langtrace — Open Source AI Observability Platform

Open-source observability for LLM apps. Trace OpenAI, Anthropic, and LangChain calls with OpenTelemetry-native instrumentation and a real-time dashboard.

Quick Use

Use it first, then decide how deep to go

Copy the block below to install the SDK, initialize tracing, and make your first auto-traced call:

pip install langtrace-python-sdk

# Initialize tracing before importing your LLM client
from langtrace_python_sdk import langtrace
langtrace.init(api_key="your-key")

# All OpenAI/Anthropic calls are now auto-traced
from anthropic import Anthropic
client = Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=100,
    messages=[{"role": "user", "content": "Hello"}],
)

What is Langtrace?

Langtrace is an open-source observability platform for LLM applications. It auto-instruments calls to OpenAI, Anthropic, LangChain, LlamaIndex, and 20+ providers using OpenTelemetry — giving you traces, latency metrics, token usage, and cost tracking in a real-time dashboard.

Best for: Teams running LLM apps in production who need observability. Works with: OpenAI, Anthropic, LangChain, LlamaIndex, Cohere, Pinecone. Setup time: Under 2 minutes.
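To make the cost-tracking idea concrete, here is a stdlib-only sketch of how per-call cost can be derived from token counts. The model name and per-1K-token prices are made-up placeholders, not Langtrace's pricing tables or any provider's real rates:

```python
# Illustrative only: deriving per-call cost from traced token counts.
# Prices below are placeholders, not real provider rates.
PRICES_PER_1K = {"example-model": {"input": 0.003, "output": 0.015}}

def call_cost(model, input_tokens, output_tokens):
    """Cost of one LLM call in dollars, given token counts."""
    p = PRICES_PER_1K[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

cost = call_cost("example-model", input_tokens=1200, output_tokens=400)
```

Summing `call_cost` over traced requests, grouped by endpoint, is all a per-endpoint cost breakdown needs.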

Core Features

1. Auto-Instrumentation

One line to trace all LLM calls:

langtrace.init()  # That's it — all calls auto-traced

Supported providers: OpenAI, Anthropic, Google, Cohere, Mistral, Groq, LangChain, LlamaIndex, Pinecone, ChromaDB, Weaviate.

2. OpenTelemetry Native

Export traces to any OTel-compatible backend:

from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

langtrace.init(
    api_key="langtrace-key",
    # Or export to your own OpenTelemetry collector instead of Langtrace cloud
    custom_remote_exporter=OTLPSpanExporter(endpoint="http://localhost:4317"),
)

3. Real-Time Dashboard

Self-hosted or cloud dashboard showing:

  • Request traces with full input/output
  • Latency percentiles (p50, p95, p99)
  • Token usage per model
  • Cost breakdown per endpoint
  • Error rates and patterns
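The latency percentiles above are simple to reason about. As an illustration (not the dashboard's actual aggregation code), a minimal nearest-rank percentile over a batch of latency samples looks like:

```python
# Illustrative nearest-rank percentile over latency samples (milliseconds).
def percentile(samples, p):
    """Return the p-th percentile of samples using the nearest-rank method."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

latencies = [120, 95, 310, 150, 98, 2200, 130, 110, 160, 140]
p50 = percentile(latencies, 50)  # typical request
p95 = percentile(latencies, 95)  # tail latency, dominated by the 2200 ms outlier
```

The p95/p99 tail is usually what matters for LLM apps, since a single slow completion can dwarf the median.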

4. Evaluation & Testing

from langtrace_python_sdk import with_langtrace_root_span

@with_langtrace_root_span("test-summarization")
def test_summary_quality():
    result = summarize("long text...")
    # Trace includes test metadata
    return result
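The traced test above still needs scoring logic of its own. As a self-contained illustration (the `keyword_coverage` helper is hypothetical, not part of the SDK), a simple keyword-coverage check might look like:

```python
# Hypothetical scoring helper -- not part of the Langtrace SDK.
def keyword_coverage(summary: str, keywords: list[str]) -> float:
    """Fraction of expected keywords that appear in the summary."""
    hits = sum(1 for k in keywords if k.lower() in summary.lower())
    return hits / len(keywords)

score = keyword_coverage(
    "Langtrace adds OpenTelemetry tracing to LLM apps",
    ["langtrace", "opentelemetry", "llm"],
)
assert score >= 0.66  # fail the test run if coverage drops
```

Attaching the score to the root span (for example via trace attributes) lets you watch quality regressions alongside latency and cost.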

5. Prompt Management

Track prompt versions alongside traces:

from langtrace_python_sdk import inject_additional_attributes

with inject_additional_attributes({"prompt_version": "v2.3", "experiment": "temp-0.7"}):
    response = client.messages.create(...)

Self-Hosting

git clone https://github.com/Scale3-Labs/langtrace
cd langtrace
docker compose up -d
# Dashboard at http://localhost:3000

FAQ

Q: How does it compare to Langfuse? A: Both are open-source LLM observability tools. Langtrace emits standard OpenTelemetry traces, while Langfuse uses its own tracing format; Langtrace also offers broader auto-instrumentation.

Q: Does it add latency? A: Traces are sent asynchronously. Overhead is < 1ms per call.
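The asynchronous send can be pictured with a stdlib-only sketch (this mimics the pattern, not Langtrace's actual exporter): the hot path only enqueues the finished span, and a background thread ships it.

```python
import queue
import threading

# Sketch of async span export: the request path pays only an enqueue,
# while a background worker handles the (slow) network export.
spans = queue.Queue()
exported = []

def worker():
    while True:
        span = spans.get()
        if span is None:  # shutdown sentinel
            break
        exported.append(span)  # stand-in for a network export

t = threading.Thread(target=worker, daemon=True)
t.start()

spans.put({"name": "messages.create", "latency_ms": 420})  # sub-millisecond enqueue
spans.put(None)
t.join()
```

Because the enqueue is O(1) and never blocks on the network, the per-call overhead stays negligible even when the export backend is slow.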

Q: Can I use it without the cloud? A: Yes, fully self-hostable with Docker Compose.

Source & Thanks

Created by Scale3 Labs. Licensed under AGPL-3.0.

Scale3-Labs/langtrace — 3k+ stars
