Workflows · May 12, 2026 · 2 min read

Langfuse Python SDK — Trace LLM Apps

Langfuse Python SDK adds tracing and observability to any LLM app via decorators or low-level calls, so you can track latency, cost, and prompts.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, an adapter-specific plan, and the raw content so agents can evaluate compatibility, risk, and next steps.

Native · 94/100 · Policy: allow
Agent surface
Any MCP/CLI agent
Type
CLI
Installation
Single
Trust
Established
Entry point
pip install langfuse
Universal CLI command
npx tokrepo install 4bc8615f-82d2-5ecf-8842-720c8188357d
Introduction


  • Best for: Python LLM apps that need reliable tracing across prompts, tools, and providers
  • Works with: Python; decorators or low-level events; works with any LLM/provider (per README)
  • Setup time: 5–20 minutes
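
As a rough setup sketch, assuming credentials are configured through the standard environment variables (LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST) and that the client's auth_check() helper is available in the SDK version you install:

    from langfuse import Langfuse

    # Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY and (optionally)
    # LANGFUSE_HOST from the environment; keys come from your Langfuse project.
    langfuse = Langfuse()

    # Verify credentials before instrumenting anything.
    if langfuse.auth_check():
        print("Langfuse reachable - safe to start adding traces.")
    else:
        print("Auth failed - check keys/host before going further.")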

Practical Notes

  • Per README: SDK v4 rewrite shipped in March 2026 (check the v4 migration guide before upgrading).
  • Start with one endpoint/function, then expand tracing to tool calls and background jobs.
  • Log only what you can keep: scrub secrets and PII in prompts/responses before shipping traces.
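
To make the last note concrete, here is a small, SDK-agnostic sketch of scrubbing obvious secrets and email addresses from a prompt before it is attached to a trace; the patterns and the scrub helper are illustrative, not part of the Langfuse API.

    import re

    # Illustrative patterns only - extend for the secret/PII formats you handle.
    REDACTIONS = [
        (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),     # API-key-like tokens
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[REDACTED_EMAIL]"),  # email addresses
        (re.compile(r"\b\d{13,19}\b"), "[REDACTED_NUMBER]"),            # long digit runs (cards, etc.)
    ]

    def scrub(text: str) -> str:
        """Strip known secret/PII patterns before text is attached to a trace."""
        for pattern, replacement in REDACTIONS:
            text = pattern.sub(replacement, text)
        return text

    raw = "Key sk-abcdefghij1234567890 from dev@example.com"
    print(scrub(raw))  # "Key [REDACTED_API_KEY] from [REDACTED_EMAIL]"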

Main

How to use it without over-instrumenting:

  1. Pick one “golden path” flow (a user question → tool calls → final answer).
  2. Add tracing at the boundaries: request in, model call out, tool call out, response back.
  3. Record inputs/outputs + timings first. Only add extra metadata (user IDs, tags, datasets) after the baseline works.
  4. Create a simple “regression dashboard”: slowest traces, highest error rate, and largest prompt payloads.

The fastest win is spotting which step burns tokens (retrieval, tool results, or prompt templates) and then trimming that step only.
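
A minimal sketch of steps 1–3, assuming the decorator-style API (@observe) the README describes is importable from the top-level langfuse package in your SDK version; the function names and the stubbed retrieval/model calls below are placeholders, and import paths can shift across major SDK versions (see the v4 migration guide).

    # One "golden path": question -> retrieval -> model call -> answer,
    # traced at each boundary. Credentials are read from the environment.
    from langfuse import observe

    @observe()  # nested span: retrieval/tool boundary
    def retrieve_docs(question: str) -> list[str]:
        return ["placeholder snippet 1", "placeholder snippet 2"]  # swap in real retrieval

    @observe()  # nested span: model-call boundary
    def call_model(prompt: str) -> str:
        return "stub answer for: " + prompt[:40]  # swap in your real LLM client

    @observe()  # root trace: request in -> response back
    def answer_question(question: str) -> str:
        docs = retrieve_docs(question)
        prompt = "Context:\n" + "\n".join(docs) + "\n\nQuestion: " + question
        return call_model(prompt)

    print(answer_question("Which step burns the most tokens?"))

Per the README's decorator description, each decorated function's inputs, outputs, and timing are recorded automatically, which covers the baseline from step 3; user IDs, tags, and datasets can come afterwards.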

FAQ

Q: Do I need a specific model/provider? A: No; the README says it works with any LLM or framework. Focus on consistent trace context instead of vendor-specific fields.

Q: Should I log full prompts? A: Only if allowed. Prefer redaction + sampling for sensitive environments; keep enough context to reproduce failures.

Q: What breaks during upgrades? A: SDK major rewrites can change event shapes. Follow the v4 migration guide before upgrading production services.

Source and acknowledgements

Source: https://github.com/langfuse/langfuse-python · License: MIT · GitHub stars: 399 · forks: 266

