Scripts · May 12, 2026 · 2 min read

Laminar — Open-Source Observability for AI Agents

Open-source observability for AI agents: self-host with Docker Compose, then use the SDK to trace runs, metrics, and outputs end-to-end.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and raw content so agents can evaluate compatibility, risk, and next steps.

Stage only · 29/100
Agent surface: Any MCP/CLI agent
Type: Script
Installation: Manual
Trust: Established
Entry point: docker compose
Universal CLI command
npx tokrepo install 7a276f53-1d98-4535-9cc3-47cbd6443bae
Introduction

  • Best for: agent teams that need repeatable tracing and debugging across prompts, tools, and multi-step runs, not just ad-hoc logs
  • Works with: Docker Compose for self-hosting; SDK instrumentation in your agent app (see the sketch after this list)
  • Setup time: 30–60 minutes
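
To make the SDK side concrete, here is a minimal tracing sketch. It assumes the repo's Python SDK (`lmnr`) with `Laminar.initialize` and an `@observe` decorator, as described in its README; verify the exact names, and how to point the SDK at a self-hosted instance, against the current docs.

```python
import os

# Assumption: the lmnr package exposes Laminar.initialize and @observe;
# for a self-hosted instance, check the docs for the base-URL setting.
from lmnr import Laminar, observe

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

@observe()  # wraps each call in a trace span
def answer(question: str) -> str:
    # ... your agent logic (LLM calls, tool calls) goes here ...
    return "stub answer"

if __name__ == "__main__":
    print(answer("Which step dominates latency?"))
```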

Practical Notes

  • Quant: define 3 core metrics per agent workflow (latency, tool-call count, success rate) and baseline them before you optimize prompts.
  • Quant: keep a replay set of 20 representative runs; compare traces after every change to detect regressions (a baselining sketch follows this list).
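
A minimal baselining sketch, assuming a hypothetical `RunRecord` export shape; in practice you would pull these fields from your traces.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class RunRecord:
    # Hypothetical shape for one replayed run, exported from your traces.
    latency_s: float
    tool_calls: int
    success: bool

def baseline(runs: list[RunRecord]) -> dict[str, float]:
    """The three core metrics over a fixed replay set."""
    return {
        "mean_latency_s": mean(r.latency_s for r in runs),
        "mean_tool_calls": mean(r.tool_calls for r in runs),
        "success_rate": sum(r.success for r in runs) / len(runs),
    }

# Snapshot once before changing prompts; recompute after every change.
runs = [
    RunRecord(latency_s=2.1, tool_calls=4, success=True),
    RunRecord(latency_s=3.8, tool_calls=7, success=False),
]
print(baseline(runs))
# {'mean_latency_s': 2.95, 'mean_tool_calls': 5.5, 'success_rate': 0.5}
```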

Observability-first iteration

If you can’t answer these questions with data, you’re guessing (a sketch answering the first question follows the list):

  • Which step dominates latency?
  • Which tool calls fail most often?
  • Which prompt change improved success rate vs just “felt better”?
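
For the first question, a sketch that totals exported span durations per step; the span dict shape here is an assumption, to be adapted to whatever your trace export looks like.

```python
from collections import defaultdict

def latency_by_step(spans: list[dict]) -> list[tuple[str, float]]:
    """Total span duration per step name, slowest first.

    `spans` is a hypothetical trace export; each entry carries a
    step name and its duration in seconds.
    """
    totals: dict[str, float] = defaultdict(float)
    for span in spans:
        totals[span["name"]] += span["duration_s"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

spans = [
    {"name": "retrieve", "duration_s": 1.7},
    {"name": "generate", "duration_s": 0.9},
    {"name": "retrieve", "duration_s": 2.2},
]
print(latency_by_step(spans)[0])  # -> retrieve dominates (~3.9 s total)
```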

Minimal instrumentation strategy

  1. Trace every run with a stable run id.
  2. Attach tool-call spans with inputs/outputs (redact secrets).
  3. Capture final outcomes (pass/fail + reason). All three steps are sketched below.
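
A sketch of all three steps using the OpenTelemetry Python API (the repo describes Laminar's tracing as OpenTelemetry-based); the span names and the `redact` helper are illustrative, and exporter setup is omitted.

```python
import uuid
from opentelemetry import trace

tracer = trace.get_tracer("agent")

SECRET_KEYS = {"api_key", "authorization", "password"}

def redact(payload: dict) -> dict:
    # Hypothetical helper: mask secret-looking fields before they hit a span.
    return {k: "<redacted>" if k.lower() in SECRET_KEYS else v
            for k, v in payload.items()}

def run_workflow(task: str) -> bool:
    # 1. Stable run id on the root span; child spans inherit its trace context.
    run_id = str(uuid.uuid4())
    with tracer.start_as_current_span("run", attributes={"run.id": run_id}) as run:
        # 2. One child span per tool call, inputs/outputs redacted.
        with tracer.start_as_current_span("tool:search") as tool:
            tool.set_attribute("input", str(redact({"query": task, "api_key": "sk-example"})))
            tool.set_attribute("output", "3 documents")
        # 3. Final outcome: pass/fail plus a reason.
        run.set_attribute("outcome", "pass")
        run.set_attribute("outcome.reason", "answer matched the expected format")
        return True
```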

Don’t drown in dashboards

Start with one workflow and one team. Once the metrics are trusted, scale to more services.

FAQ

Q: Do I need to self-host? A: No. The repo documents self-hosting; teams can choose managed options or local-only usage.

Q: What should I instrument first? A: One end-to-end workflow that currently fails or is slow—make it measurable.

Q: How do I compare prompt changes? A: Use a fixed replay set and compare traces/metrics, not anecdotes.
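
A sketch of that comparison, reusing the metric dict from the baselining sketch above; the 5% tolerance and the direction rules are assumptions to tune per workflow.

```python
def regressed(before: dict[str, float], after: dict[str, float],
              tolerance: float = 0.05) -> list[str]:
    """Flag metrics that moved the wrong way by more than `tolerance`.

    Direction is metric-specific: success rate should not drop;
    latency and tool-call count should not grow.
    """
    flags = []
    if after["success_rate"] < before["success_rate"] - tolerance:
        flags.append("success_rate dropped")
    for key in ("mean_latency_s", "mean_tool_calls"):
        if after[key] > before[key] * (1 + tolerance):
            flags.append(f"{key} grew")
    return flags

print(regressed(
    {"mean_latency_s": 2.9, "mean_tool_calls": 5.5, "success_rate": 0.80},
    {"mean_latency_s": 3.4, "mean_tool_calls": 5.0, "success_rate": 0.80},
))  # -> ['mean_latency_s grew']
```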

Source and acknowledgments

Source: https://github.com/lmnr-ai/lmnr · License: Apache-2.0 · GitHub stars: 2,875 · forks: 195
