
Best AI Tools for Monitoring & Observability (2026)

AI observability platforms, LLM evaluation tools, uptime monitoring, and agent debugging dashboards. See inside your AI systems.

30 tools

LangSmith — Prompt Debugging and LLM Observability

Debug, test, and monitor LLM applications in production. LangSmith provides trace visualization, prompt playground, dataset evaluation, and regression testing for AI.

Prompt Lab · 50 · Prompts

Latitude — AI Agent Engineering Platform

Open-source platform for building, evaluating, and monitoring AI agents in production. Observability, prompt playground, LLM-as-judge evals, experiment comparison. LGPL-3.0, 4,000+ stars.

AI Open Source · 90 · Scripts

Opik — Debug, Evaluate & Monitor LLM Apps

Trace LLM calls, run automated evaluations, and monitor RAG and agent quality in production. By Comet. 18K+ GitHub stars.

AI Open Source · 70 · Configs

Agenta — Open-Source LLMOps Platform

Prompt playground, evaluation, and observability in one platform. Compare prompts, run evals, trace production calls. 4K+ stars.

Agent Toolkit · 60 · Workflows

Grafana — Open Source Data Visualization & Observability

Grafana is the leading open-source platform for monitoring and observability. Visualize metrics, logs, and traces from Prometheus, Loki, Elasticsearch, and 100+ data sources.

Script Depot · 48 · Scripts

Sentry — Open Source Error Tracking & Performance Monitoring

Sentry is the developer-first error tracking and performance monitoring platform. Capture exceptions, trace performance issues, and debug production errors across all languages.

AI Open Source · 56 · Configs

Gemini CLI Extension: Observability — Monitoring & Logs

Gemini CLI extension for Google Cloud observability. Set up monitoring, analyze logs, create dashboards, and configure alerts.

Skill Factory · 56 · Skills

Axiom MCP — Log Search and Analytics for AI Agents

MCP server that gives AI agents access to Axiom log analytics. Query logs, traces, and metrics through natural language for AI-powered observability and incident response.

MCP Hub · 55 · MCP Configs

Langfuse — Open Source LLM Observability

Langfuse is an open-source LLM engineering platform for tracing, prompt management, evaluation, and debugging AI apps. 24.1K+ GitHub stars. Self-hosted or cloud. MIT.

AI Open Source · 55 · Configs

AgentOps — Observability Dashboard for AI Agents

Python SDK for monitoring AI agent sessions with real-time dashboards, token tracking, cost analysis, and error replay. Two lines of code to instrument any framework. 4,500+ GitHub stars.

Agent Toolkit · 54 · Scripts
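The card above claims two-line instrumentation. Going by the AgentOps docs, that amounts to an init call at startup; a minimal sketch, with a placeholder API key:

```python
import agentops

# Starts a monitored session; agent and LLM calls made through supported
# frameworks (CrewAI, LangChain, AutoGen, ...) are then captured automatically.
agentops.init(api_key="YOUR_AGENTOPS_API_KEY")
```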

OpenObserve — Rust-Based Petabyte-Scale Observability Platform

All-in-one Rust observability platform that ingests logs, metrics, traces and RUM into Parquet on object storage for 140x cheaper retention.

AI Open Source · 51 · Configs

OpenLIT — OpenTelemetry LLM Observability

Monitor LLM costs, latency, and quality with OpenTelemetry-native tracing. GPU monitoring and guardrails built in. 2.3K+ stars.

AI Open Source · 50 · Configs

AgentOps — Observability for AI Agents

Python SDK for AI agent monitoring. LLM cost tracking, session replay, benchmarking, and error analysis. Integrates with CrewAI, LangChain, AutoGen, and more. 5.4K+ stars.

Script Depot · 50 · Scripts

Netdata — Real-Time Infrastructure Monitoring & Observability

Netdata is an open-source monitoring agent that collects thousands of metrics per second with zero configuration. Beautiful dashboards, ML-powered alerts, and instant deployment.

Script Depot · 49 · Scripts

Evidently — ML & LLM Monitoring with 100+ Metrics

Evaluate, test, and monitor AI systems with 100+ built-in metrics for data drift, model quality, and LLM output. 7.3K+ stars.

AI Open Source · 49 · Workflows

SigNoz — Open Source APM & Observability Platform

SigNoz is an open-source Datadog/New Relic alternative with logs, traces, and metrics in one platform. Native OpenTelemetry support, ClickHouse backend, and powerful dashboards.

AI Open Source · 49 · Configs

Phoenix — Open Source AI Observability

Phoenix is an AI observability platform for tracing, evaluating, and debugging LLM apps. 9.1K+ stars. OpenTelemetry, evals, prompt management.

AI Open Source · 49 · Configs

Pixie — eBPF-Based Auto-Instrumentation for Kubernetes Observability

CNCF observability platform that uses eBPF to capture metrics, traces, and logs from every pod with zero code changes.

AI Open Source · 46 · Configs

Sentry MCP — Error Monitoring Server for AI Agents

MCP server that connects AI agents to Sentry for real-time error monitoring. Query issues, analyze stack traces, track regressions, and resolve bugs with full crash context. 2,000+ stars.

MCP Hub · 44 · MCP Configs

OpenAI Agents SDK — Build Multi-Agent Systems in Python

Official OpenAI Python SDK for building multi-agent systems with handoffs, guardrails, and tracing. Agents delegate to specialists, enforce safety rules, and produce observable traces. 8,000+ stars.

Agent Toolkit · 44 · CLI Tools

Langtrace — Open Source AI Observability Platform

Open-source observability for LLM apps. Trace OpenAI, Anthropic, and LangChain calls with OpenTelemetry-native instrumentation and a real-time dashboard.

AI Open Source · 42 · Configs

Vector — High-Performance Observability Data Pipeline

Vector collects, transforms, and routes logs, metrics, and traces from any source to any destination. Written in Rust, it handles 100x more throughput than Logstash/Fluentd on the same hardware with a unified config language.

AI Open Source · 34 · Configs

OpenTelemetry Collector — Vendor-Neutral Telemetry Pipeline

The OpenTelemetry Collector is the CNCF-graduated pipeline for receiving, processing, and exporting metrics, logs, and traces across any observability backend, replacing per-vendor agents with one portable binary.

AI Open Source · 27 · Configs

Coroot — Open Source Observability with AI Root Cause Analysis

Coroot is a self-hosted observability and APM tool that combines metrics, logs, traces, and continuous profiling with eBPF-based auto-instrumentation and AI-powered root cause analysis in predefined dashboards.

AI Open Source · 18 · Configs

changedetection.io — Self-Hosted Website Change Monitoring

changedetection.io monitors websites for content changes and sends alerts. Track price drops, restock alerts, content updates, and page modifications automatically.

AI Open Source · 97 · Configs

VoltAgent — TypeScript AI Agent Framework

Open-source TypeScript framework for building AI agents with built-in Memory, RAG, Guardrails, MCP, Voice, and Workflow support. Includes LLM observability console for debugging.

Script Depot · 92 · Scripts

btop — Beautiful Resource Monitor for Linux, macOS, Windows

btop is a resource monitor that shows usage and stats for CPU, memory, disks, network, and processes. Beautiful TUI with mouse support, gradients, themes, and low memory footprint. The modern successor to bashtop and bpytop.

Script Depot · 86 · Scripts

D3.js — Bring Data to Life with SVG, Canvas & HTML

D3 is the grandparent of data visualization on the web — a low-level toolkit for binding data to the DOM, applying data-driven transformations, and building any chart imaginable. Powers the New York Times, Observable, and thousands of dashboards.

Script Depot · 79 · Scripts

Claude Code Agent: K8s Specialist — Kubernetes Operations

Claude Code agent for Kubernetes. Deployment configs, Helm charts, troubleshooting, scaling, monitoring, and cluster management.

Skill Factory · 75 · Skills

DeepEval — LLM Testing Framework with 30+ Metrics

DeepEval is a pytest-like testing framework for LLM apps with 30+ metrics. 14.4K+ GitHub stars. RAG, agent, multimodal evaluation. Runs locally. MIT.

Script Depot · 74 · Scripts

AI Observability

As AI moves from prototypes to production, observability becomes critical. You need to know what your AI is doing, why it made a decision, how much it costs, and when it fails.

LLM Observability — Opik, Langfuse, and AgentOps provide tracing, logging, and analytics for LLM applications. See every prompt, completion, tool call, and token cost in a unified dashboard.
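A minimal sketch of what such a trace looks like at the instrumentation level, using vendor-neutral OpenTelemetry spans (the attribute names and token count below are illustrative, not any platform's official schema):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to stdout; a real setup would export to Opik, Langfuse, etc.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("llm-app")

def chat(prompt: str) -> str:
    with tracer.start_as_current_span("llm.completion") as span:
        span.set_attribute("llm.prompt", prompt)
        completion = "stubbed reply"  # hypothetical: call your model here
        span.set_attribute("llm.completion", completion)
        span.set_attribute("llm.tokens.total", 42)  # read from the real API response
        return completion

chat("Why is the sky blue?")
```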

Agent Debugging — Multi-step AI agents are hard to debug. Observability tools capture the full execution trace — every reasoning step, tool invocation, and decision point — so you can replay and diagnose failures.

Evaluation Frameworks — DeepEval, Ragas, and custom eval pipelines measure AI quality systematically. Track accuracy, hallucination rates, latency, and cost across model versions.
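To make the evaluation side concrete, a pytest-style check with DeepEval looks roughly like this (pattern taken from DeepEval's documented quickstart; AnswerRelevancyMetric needs an LLM judge configured, such as an OpenAI API key):

```python
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_support_answer_is_relevant():
    test_case = LLMTestCase(
        input="How do I reset my password?",
        actual_output="Click 'Forgot password' on the login page and follow the email link.",
    )
    # Fails the test if the judged relevancy score drops below 0.7.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```

Run it with `deepeval test run test_app.py` to get a scored report alongside the pass/fail result.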

Infrastructure Monitoring — Uptime Kuma and Grafana integrations monitor your AI endpoints, alert on degradation, and track SLAs. Essential for production AI services where downtime or quality drops directly impact users.
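On the Grafana path, instrumenting an AI endpoint can be as simple as exposing counters and histograms with the Prometheus Python client; a sketch, where the metric names and the stubbed model call are assumptions for illustration:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("llm_requests_total", "LLM requests", ["model", "status"])
LATENCY = Histogram("llm_request_seconds", "LLM request latency", ["model"])

def call_llm(model: str, prompt: str) -> str:
    start = time.perf_counter()
    try:
        time.sleep(random.uniform(0.1, 0.3))  # stand-in for the real model call
        REQUESTS.labels(model=model, status="ok").inc()
        return "stubbed reply"
    except Exception:
        REQUESTS.labels(model=model, status="error").inc()
        raise
    finally:
        LATENCY.labels(model=model).observe(time.perf_counter() - start)

start_http_server(9100)  # Prometheus scrapes :9100/metrics; Grafana graphs and alerts on it
while True:  # demo loop to keep the process (and metrics endpoint) alive
    call_llm("example-model", "ping")
```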

You can't improve what you can't measure — and AI systems are notoriously hard to measure.

Frequently asked questions

What is AI observability?

AI observability is the practice of monitoring, tracing, and analyzing AI system behavior in production. It goes beyond traditional monitoring (is the server up?) to answer AI-specific questions: Is the model hallucinating? Are responses getting slower? Which prompts produce the best results? How much does each query cost? Tools like Opik and AgentOps provide dashboards that answer these questions in real time.
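For the cost question in particular, per-query cost is just token counts multiplied by the provider's rate card. A toy example (the prices below are made up; check your provider's current pricing):

```python
# (input, output) USD per 1K tokens: illustrative numbers only
PRICES = {"example-model": (0.0025, 0.0100)}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

print(query_cost("example-model", 1200, 350))  # 0.0065 USD
```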

How do I debug AI agent failures?

Use tracing tools that capture the full agent execution: every LLM call, tool invocation, memory access, and decision point. AgentOps and Langfuse visualize these traces as timelines, letting you pinpoint exactly where an agent went wrong. For intermittent failures, set up automated evaluation that flags quality drops before users report them.
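If you are not ready to adopt a platform, the core idea is easy to prototype: log every step with enough context to replay it later. A minimal, framework-free sketch (the step names and payloads are hypothetical):

```python
import json
import time

class TraceRecorder:
    """Append-only record of an agent run, dumped to JSON for offline replay."""

    def __init__(self):
        self.steps = []

    def record(self, kind: str, name: str, payload: dict) -> None:
        self.steps.append({"t": time.time(), "kind": kind, "name": name, "payload": payload})

    def dump(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(self.steps, f, indent=2)

trace = TraceRecorder()
trace.record("llm_call", "planner", {"prompt": "Find flights to Lisbon"})
trace.record("tool_call", "search_flights", {"query": "LIS", "results": 3})
trace.record("decision", "planner", {"choice": "book_cheapest"})
trace.dump("agent_trace.json")  # inspect this file when a run goes wrong
```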

What metrics should I track for LLM applications?

Essential metrics: latency (time to first token, total response time), cost (tokens per request, cost per user), quality (eval scores, hallucination rate, user feedback), and reliability (error rate, timeout rate, retry rate). Advanced: track these metrics per prompt template, per model version, and per user segment to identify regressions quickly.
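Time to first token is the easiest of these to mismeasure; a small sketch of timing it around any streaming generator (the fake stream stands in for a real model client):

```python
import time

def measure_stream(stream):
    """Pass tokens through while timing time-to-first-token and total latency."""
    start = time.perf_counter()
    ttft, tokens = None, 0
    for token in stream:
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first token
        tokens += 1
        yield token
    total = time.perf_counter() - start
    print(f"TTFT={ttft:.3f}s total={total:.3f}s tokens={tokens}")

def fake_stream():  # stand-in for a real streaming completion
    for word in ["AI", "observability", "matters"]:
        time.sleep(0.05)
        yield word

list(measure_stream(fake_stream()))
```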
