# Langtrace — Open Source AI Observability Platform

> Open-source observability for LLM apps. Trace OpenAI, Anthropic, and LangChain calls with OpenTelemetry-native instrumentation and a real-time dashboard.

## Install

Add the SDK to your project:

```bash
pip install langtrace-python-sdk
```

## Quick Use

```python
from langtrace_python_sdk import langtrace

langtrace.init(api_key="your-key")

# All OpenAI/Anthropic calls are now auto-traced
from anthropic import Anthropic

client = Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=100,
    messages=[{"role": "user", "content": "Hello"}],
)
```

## What is Langtrace?

Langtrace is an open-source observability platform for LLM applications. It auto-instruments calls to OpenAI, Anthropic, LangChain, LlamaIndex, and 20+ providers using OpenTelemetry — giving you traces, latency metrics, token usage, and cost tracking in a real-time dashboard.

**Answer-Ready**: Langtrace is an open-source AI observability platform that auto-instruments LLM calls to OpenAI, Anthropic, LangChain, and 20+ providers with OpenTelemetry-native tracing, latency metrics, and cost tracking.

**Best for**: Teams running LLM apps in production who need observability.

**Works with**: OpenAI, Anthropic, LangChain, LlamaIndex, Cohere, Pinecone.

**Setup time**: Under 2 minutes.

## Core Features

### 1. Auto-Instrumentation

One line to trace all LLM calls:

```python
langtrace.init()  # That's it — all calls auto-traced
```

Supported providers: OpenAI, Anthropic, Google, Cohere, Mistral, Groq, LangChain, LlamaIndex, Pinecone, ChromaDB, Weaviate.

### 2. OpenTelemetry Native

Export traces to any OTel-compatible backend:

```python
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

langtrace.init(
    api_key="langtrace-key",
    # Or export to your own collector
    custom_remote_exporter=OTLPSpanExporter(endpoint="http://localhost:4317"),
)
```

### 3. Real-Time Dashboard

Self-hosted or cloud dashboard showing:

- Request traces with full input/output
- Latency percentiles (p50, p95, p99)
- Token usage per model
- Cost breakdown per endpoint
- Error rates and patterns

### 4. Evaluation & Testing

```python
from langtrace_python_sdk import with_langtrace_root_span

@with_langtrace_root_span("test-summarization")
def test_summary_quality():
    # summarize() is your own application code; the trace includes test metadata
    result = summarize("long text...")
    return result
```

### 5. Prompt Management

Track prompt versions alongside traces:

```python
from langtrace_python_sdk import inject_additional_attributes

with inject_additional_attributes({"prompt_version": "v2.3", "experiment": "temp-0.7"}):
    response = client.messages.create(...)
```

## Self-Hosting

```bash
git clone https://github.com/Scale3-Labs/langtrace
cd langtrace
docker compose up -d
# Dashboard at http://localhost:3000
```
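To send traces to a self-hosted dashboard instead of Langtrace Cloud, point the SDK at your own instance when initializing. A minimal sketch follows, assuming the SDK's `api_host` parameter and the endpoint path shown (neither appears in this document, so verify both against your deployment):

```python
from langtrace_python_sdk import langtrace

# Assumption: `api_host` and the /api/trace path are illustrative and should be
# checked against your self-hosted instance; the API key is generated there.
langtrace.init(
    api_key="your-self-hosted-key",
    api_host="http://localhost:3000/api/trace",  # the Docker Compose dashboard above
)
```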
## FAQ

**Q: How does it compare to LangFuse?**
A: Both are open-source LLM observability tools. Langtrace is OpenTelemetry-native (standard traces), while LangFuse uses its own tracing format. Langtrace has broader auto-instrumentation.

**Q: Does it add latency?**
A: Traces are sent asynchronously. Overhead is < 1ms per call.

**Q: Can I use it without the cloud?**
A: Yes, it is fully self-hostable with Docker Compose.

## Source & Thanks

> Created by [Scale3 Labs](https://github.com/Scale3-Labs). Licensed under AGPL-3.0.
>
> [Scale3-Labs/langtrace](https://github.com/Scale3-Labs/langtrace) — 3k+ stars

## Quick Use

```bash
pip install langtrace-python-sdk
```

One line of code auto-traces all LLM calls.

## What is Langtrace?

Langtrace is an open-source AI observability platform that auto-traces LLM calls to OpenAI, Anthropic, LangChain, and 20+ other providers, built on the OpenTelemetry standard.

**One-line summary**: An open-source AI observability platform with OpenTelemetry-native tracing of LLM calls, plus latency metrics and cost tracking.

**Best for**: Teams running LLM apps in production who need observability.

## Core Features

### 1. Auto-Instrumentation

Enabled with one line of code; supports 20+ providers.

### 2. OpenTelemetry Native

Export to any OTel-compatible backend.

### 3. Real-Time Dashboard

Latency, token usage, cost, and error rates at a glance.

### 4. Self-Hostable

One-command deployment with Docker Compose.

## FAQ

**Q: How does it compare to LangFuse?**
A: Langtrace is built on the OpenTelemetry standard and has broader auto-instrumentation coverage.

**Q: What is the latency overhead?**
A: Traces are sent asynchronously; < 1ms per call.

## Source & Thanks

> [Scale3-Labs/langtrace](https://github.com/Scale3-Labs/langtrace) — 3k+ stars, AGPL-3.0

---

Source: https://tokrepo.com/en/workflows/a53444d6-2d55-4f59-ba6f-3b672d7ec458
Author: AI Open Source