Workflows · May 12, 2026 · 2 min read

Langfuse Python SDK — Trace LLM Apps

Langfuse Python SDK adds tracing and observability to any LLM app via decorators or low-level calls, so you can track latency, cost, and prompts.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and the raw content to help agents judge fit, risk, and next actions.

Native · 94/100 · Policy: allow
Agent surface: Any MCP/CLI agent
Type: CLI
Installation: Single
Trust: Established
Entry point: pip install langfuse
Universal CLI command: npx tokrepo install 4bc8615f-82d2-5ecf-8842-720c8188357d
Introduction

  • Best for: Python LLM apps that need reliable tracing across prompts, tools, and providers
  • Works with: Python; decorators or low-level events; any LLM/provider (per README)
  • Setup time: 5–20 minutes (see the setup sketch below)
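
A minimal setup sketch, assuming the environment-variable names documented by Langfuse (LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST); verify them, and the auth_check helper, against the docs for your installed SDK version:

    # pip install langfuse
    from langfuse import Langfuse

    # With no arguments, the client reads credentials from the
    # LANGFUSE_* environment variables.
    langfuse = Langfuse()

    # Optional: fail fast at startup if credentials are misconfigured.
    assert langfuse.auth_check()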

Practical Notes

  • Per README: SDK v4 rewrite shipped in March 2026 (check the v4 migration guide before upgrading).
  • Start with one endpoint/function, then expand tracing to tool calls and background jobs.
  • Log only what you can keep: scrub secrets and PII in prompts/responses before shipping traces.

Main

How to use it without over-instrumenting:

  1. Pick one “golden path” flow (a user question → tool calls → final answer).
  2. Add tracing at the boundaries: request in, model call out, tool call out, response back.
  3. Record inputs/outputs + timings first. Only add extra metadata (user IDs, tags, datasets) after the baseline works.
  4. Create a simple “regression dashboard”: slowest traces, highest error rate, and largest prompt payloads.

The fastest win is spotting which step burns the most tokens (retrieval, tool results, or prompt templates) and then trimming only that step.
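
A minimal sketch of steps 1–3 using the SDK's @observe decorator; import paths have moved between major versions, so check the migration guide for yours. retrieve_docs and call_model are hypothetical placeholders, not Langfuse APIs:

    from langfuse.decorators import observe

    @observe()  # boundary: request in, response back
    def answer_question(question: str) -> str:
        context = retrieve_docs(question)      # boundary: tool call out
        return call_model(question, context)   # boundary: model call out

    @observe()  # nested span with its own input/output and timing
    def retrieve_docs(question: str) -> str:
        return "...retrieved context..."  # placeholder retrieval step

    @observe(as_type="generation")  # marks this span as an LLM generation
    def call_model(question: str, context: str) -> str:
        # Placeholder for the provider call; any LLM works per the README.
        return f"stub answer built from {len(context)} chars of context"

Only once this baseline traces cleanly is it worth layering on user IDs, tags, or dataset links (step 3).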

FAQ

Q: Do I need a specific model/provider? A: No—README says it works with any LLM or framework; focus on consistent trace context instead of vendor-specific fields.

Q: Should I log full prompts? A: Only if allowed. Prefer redaction + sampling for sensitive environments; keep enough context to reproduce failures.
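
A hedged sketch of that redaction, assuming the mask callback described in the Langfuse docs (a function the client applies to trace inputs/outputs before they leave the process); confirm the parameter name and callback signature for your SDK version:

    import re
    from langfuse import Langfuse

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def redact(data, **kwargs):
        # Runs over every traced input/output before it is sent.
        # Extend with your own secret/PII patterns as needed.
        if isinstance(data, str):
            return EMAIL.sub("[REDACTED_EMAIL]", data)
        return data  # non-string payloads pass through unchanged

    langfuse = Langfuse(mask=redact)  # assumption: `mask` kwarg per docs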

Q: What breaks during upgrades? A: SDK major rewrites can change event shapes. Follow the v4 migration guide before upgrading production services.

Source and acknowledgments

Source: https://github.com/langfuse/langfuse-python · License: MIT · GitHub stars: 399 · forks: 266
