Scripts · May 12, 2026 · 2 min read

Laminar — Open-Source Observability for AI Agents

Open-source observability for AI agents: self-host with Docker Compose, then use the SDK to trace runs, metrics, and outputs end-to-end.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and the raw content to help agents judge fit, risk, and next actions.

Stage only · 29/100
Agent surface
Any MCP/CLI agent
Type
Script
Installation
Manual
Trust
Established
Entry point
docker compose
Universal CLI command
npx tokrepo install 7a276f53-1d98-4535-9cc3-47cbd6443bae
Introduction

  • Best for: agent teams who need repeatable tracing and debugging, not just ad-hoc logs, across prompts, tools, and multi-step runs
  • Works with: Docker Compose for self-hosting; SDK instrumentation in your agent app
  • Setup time: 30–60 minutes

Practical Notes

  • Quant: define 3 core metrics per agent workflow (latency, tool-call count, success rate) and baseline them before you optimize prompts.
  • Quant: keep a replay set of 20 representative runs; compare traces after every change to detect regressions.
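The two quant notes above can be sketched in plain Python. This is an illustrative baseline calculator, not part of the Laminar SDK; the run-record fields (`latency_s`, `tool_calls`, `success`) are assumptions about what you choose to log:

```python
from statistics import mean

def baseline_metrics(runs):
    """Compute the three core metrics over a list of recorded runs.

    Each run is a dict like:
      {"latency_s": 4.2, "tool_calls": 7, "success": True}
    """
    latencies = sorted(r["latency_s"] for r in runs)
    return {
        "p50_latency_s": latencies[len(latencies) // 2],        # median latency
        "avg_tool_calls": mean(r["tool_calls"] for r in runs),  # calls per run
        "success_rate": sum(r["success"] for r in runs) / len(runs),
    }

runs = [
    {"latency_s": 3.1, "tool_calls": 4, "success": True},
    {"latency_s": 5.0, "tool_calls": 6, "success": False},
    {"latency_s": 4.2, "tool_calls": 5, "success": True},
]
print(baseline_metrics(runs))
```

Compute this once before touching prompts; every later change is judged against these numbers, not against impressions.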

Observability-first iteration

If you can’t answer these questions with data, you’re guessing:

  • Which step dominates latency?
  • Which tool calls fail most often?
  • Which prompt change improved success rate vs just “felt better”?
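Answering those questions from trace data reduces to simple aggregations over spans. A minimal sketch, assuming a flat span schema with `step`, `duration_s`, `tool`, and `ok` fields (an assumption for illustration, not a fixed Laminar schema):

```python
from collections import Counter, defaultdict

def dominant_step(spans):
    """Which step accounts for the most total latency?"""
    totals = defaultdict(float)
    for s in spans:
        totals[s["step"]] += s["duration_s"]
    return max(totals, key=totals.get)

def flakiest_tool(spans):
    """Which tool call fails most often?"""
    failures = Counter(s["tool"] for s in spans if s.get("tool") and not s["ok"])
    return failures.most_common(1)[0][0] if failures else None

spans = [
    {"step": "plan",     "duration_s": 0.4, "tool": None,     "ok": True},
    {"step": "retrieve", "duration_s": 2.1, "tool": "search", "ok": False},
    {"step": "retrieve", "duration_s": 1.9, "tool": "search", "ok": True},
    {"step": "answer",   "duration_s": 0.8, "tool": None,     "ok": True},
]
print(dominant_step(spans), flakiest_tool(spans))  # retrieve search
```

The third question (prompt change vs "felt better") falls out of running these aggregations over the same replay set before and after the change.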

Minimal instrumentation strategy

  1. Trace every run with a stable run id.
  2. Attach tool-call spans with inputs/outputs (redact secrets).
  3. Capture final outcomes (pass/fail + reason).
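The three steps above can be sketched framework-agnostically. The `RunTrace` class and the secret pattern below are illustrative assumptions, not the Laminar SDK API; a real setup would emit OpenTelemetry spans instead of appending to a list:

```python
import re
import time
import uuid

# Hypothetical secret patterns; extend to match your own key formats.
SECRET = re.compile(r"(sk-[A-Za-z0-9]+|Bearer \S+)")

def redact(text):
    """Mask anything that looks like a credential before it is stored."""
    return SECRET.sub("[REDACTED]", text)

class RunTrace:
    def __init__(self):
        self.run_id = str(uuid.uuid4())  # stable id for the whole run
        self.spans = []
        self.outcome = None

    def tool_span(self, tool, inputs, outputs, ok=True):
        """Record one tool call with redacted inputs/outputs."""
        self.spans.append({
            "tool": tool,
            "inputs": redact(inputs),
            "outputs": redact(outputs),
            "ok": ok,
            "ts": time.time(),
        })

    def finish(self, passed, reason):
        """Capture the final outcome: pass/fail plus a reason string."""
        self.outcome = {"pass": passed, "reason": reason}
```

The key property is that every span carries the same `run_id`, so a single failed run can be reassembled end-to-end later.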

Don’t drown in dashboards

Start with one workflow and one team. Once the metrics are trusted, scale to more services.

FAQ

Q: Do I need to self-host? A: No. The repo documents self-hosting; teams can choose managed options or local-only usage.

Q: What should I instrument first? A: One end-to-end workflow that currently fails or is slow—make it measurable.

Q: How do I compare prompt changes? A: Use a fixed replay set and compare traces/metrics, not anecdotes.
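The replay comparison in that last answer can be as simple as diffing two metric snapshots computed over the same fixed run set. A sketch with hypothetical metric names and a 5% tolerance (both assumptions you should tune):

```python
def metric_drift(before, after, tol=0.05):
    """Return metrics whose relative change exceeds `tol` between two
    snapshots taken over the same replay set."""
    drift = {}
    for name, old in before.items():
        new = after[name]
        if old and abs(new - old) / abs(old) > tol:
            drift[name] = round(new - old, 4)
    return drift

before = {"p50_latency_s": 4.2, "success_rate": 0.66}
after  = {"p50_latency_s": 5.1, "success_rate": 0.67}
print(metric_drift(before, after))  # {'p50_latency_s': 0.9}
```

Anything this function flags after a prompt change is a regression to investigate, regardless of how the outputs "felt".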


Source and acknowledgements

Source: https://github.com/lmnr-ai/lmnr · License: Apache-2.0 · GitHub stars: 2,875 · forks: 195
