Configs · Apr 7, 2026 · 2 min read

Langtrace — Open Source AI Observability Platform

Open-source observability for LLM apps. Trace OpenAI, Anthropic, and LangChain calls with OpenTelemetry-native instrumentation and a real-time dashboard.

TL;DR
Langtrace instruments your LLM calls with OpenTelemetry and shows traces in a real-time dashboard.
§01

What it is

Langtrace is an open-source observability platform purpose-built for LLM-powered applications. It hooks into your existing OpenAI, Anthropic, and LangChain calls and records every request, response, token count, and latency figure using OpenTelemetry-native instrumentation.

If you run AI features in production and need to understand cost, latency, or failure patterns without building your own tracing infrastructure, Langtrace gives you that visibility out of the box.

§02

How it saves time or tokens

Without observability, debugging LLM applications means adding print statements or scrolling through logs. Langtrace captures structured traces automatically, so you skip the manual instrumentation step entirely. It also surfaces token usage per call, letting you spot prompt bloat before it inflates your bill.
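
For reference, the per-call token counts come from the usage object the OpenAI SDK returns on every response; a minimal sketch of reading the same figures directly:

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model='gpt-4',
    messages=[{'role': 'user', 'content': 'Explain observability'}]
)

# response.usage carries the per-call counts a tracer records.
usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)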

§03

How to use

  1. Install the Langtrace SDK in your project (for Python, the package is langtrace-python-sdk) and initialize it with your project API key.
  2. Make LLM calls as usual through OpenAI, Anthropic, or LangChain SDKs. Langtrace patches these clients automatically.
  3. Open the Langtrace dashboard to view traces, latency distributions, token usage, and error rates in real time.
§04

Example

from langtrace_python_sdk import langtrace
import openai

# Initialize Langtrace
langtrace.init(api_key='your-api-key')

# Normal OpenAI call — automatically traced
client = openai.OpenAI()
response = client.chat.completions.create(
    model='gpt-4',
    messages=[{'role': 'user', 'content': 'Explain observability'}]
)
print(response.choices[0].message.content)
§05

Related on TokRepo

  • AI Gateway tools — Compare Langtrace with gateway-level observability solutions like Langfuse and Helicone.
  • AI tools for monitoring — Browse other monitoring and tracing tools for AI workloads.
§06

Common pitfalls

  • Forgetting to call langtrace.init() before any LLM client is instantiated means traces never get captured; see the sketch after this list.
  • Running the dashboard locally without a persistent datastore loses traces on restart. Use a hosted setup or configure a durable backend.
  • Assuming Langtrace replaces application-level logging. It traces LLM calls specifically; you still need standard logging for non-LLM code paths.
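
A minimal sketch of the ordering the first pitfall describes, using the same init call as the example above:

from langtrace_python_sdk import langtrace
import openai

# Wrong: a client created before init() is never patched,
# so its calls produce no traces.
#   client = openai.OpenAI()
#   langtrace.init(api_key='your-api-key')

# Right: initialize Langtrace first, then create the client.
langtrace.init(api_key='your-api-key')
client = openai.OpenAI()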

Frequently Asked Questions

What LLM providers does Langtrace support?

Langtrace supports OpenAI, Anthropic, and LangChain-based applications out of the box. It uses automatic patching so your existing SDK calls get traced without code changes beyond the initialization step.

Is Langtrace free to use?

Langtrace is open source. You can self-host the entire platform at no cost. The project uses OpenTelemetry standards, so you can also export traces to any compatible backend you already operate.

How does Langtrace differ from LangSmith?

LangSmith is tightly coupled to the LangChain ecosystem. Langtrace is provider-agnostic and built on OpenTelemetry, so it works with any LLM client and integrates with existing observability stacks like Jaeger or Grafana.
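
Since the spans are standard OpenTelemetry, routing them into an existing stack uses the usual OTel plumbing. A minimal sketch with the generic OTLP exporter (plain OpenTelemetry SDK calls, not a Langtrace-specific API; the collector endpoint is a placeholder):

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Ship spans to a local OTel collector, which can fan out to Jaeger or Grafana.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint='http://localhost:4318/v1/traces'))
)
trace.set_tracer_provider(provider)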

Does Langtrace add latency to LLM calls?

Langtrace instruments calls asynchronously. The tracing overhead is negligible because trace data is batched and sent in the background, not on the critical path of your LLM request.

Can I use Langtrace in production?

Yes. Langtrace is designed for production workloads. It supports batched trace export, configurable sampling rates, and integrates with OpenTelemetry collectors that handle high-throughput environments.
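
One way to get configurable sampling is the standard OpenTelemetry sampler interface; a minimal sketch with plain OTel SDK calls (not necessarily the exact knob Langtrace exposes):

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Keep roughly 10% of traces; child spans follow the parent's decision.
provider = TracerProvider(sampler=ParentBased(TraceIdRatioBased(0.1)))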


Source & Thanks

Created by Scale3 Labs. Licensed under AGPL-3.0.

Scale3-Labs/langtrace — 3k+ stars
