Key Features
Auto-Instrumentation
Call agentops.init() once at startup and all LLM calls are tracked automatically, with no other code changes needed.
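The general technique behind this kind of auto-instrumentation is wrapping (monkey-patching) the LLM client's call method at init time, so existing call sites are recorded without being modified. A minimal stdlib sketch of that pattern, using a made-up `FakeLLMClient` and `events` list rather than AgentOps internals:

```python
# Sketch of auto-instrumentation via monkey-patching. FakeLLMClient and
# the events list are illustrative stand-ins, not AgentOps internals.
import functools
import time

events = []  # captured call records

class FakeLLMClient:
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def instrument(client_cls):
    original = client_cls.complete

    @functools.wraps(original)
    def wrapper(self, prompt):
        start = time.perf_counter()
        response = original(self, prompt)
        events.append({
            "prompt": prompt,
            "response": response,
            "latency_s": time.perf_counter() - start,
        })
        return response

    client_cls.complete = wrapper  # patch once, at setup time

instrument(FakeLLMClient)          # one-time setup, analogous to agentops.init()
FakeLLMClient().complete("hello")  # call site is unchanged, yet tracked
```

After the one `instrument(...)` call, every `complete()` call lands in `events` with its prompt, response, and latency.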
Cost Tracking
Real-time cost per session, per agent, per model. Set budget alerts.
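Per-call cost is essentially token counts times per-model rates, aggregated per session. A sketch of that arithmetic, where the price table, session data, and budget threshold are all made-up numbers for illustration, not AgentOps pricing data:

```python
# Illustrative per-model cost accounting from token counts.
# Rates, session, and budget are hypothetical values for the sketch.
PRICES_PER_1K = {  # (input, output) USD per 1K tokens
    "gpt-4o": (0.005, 0.015),
    "gpt-4o-mini": (0.00015, 0.0006),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = PRICES_PER_1K[model]
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

session = [  # (model, input_tokens, output_tokens) per LLM call
    ("gpt-4o", 1200, 300),
    ("gpt-4o-mini", 5000, 1000),
]
total = sum(call_cost(m, i, o) for m, i, o in session)

BUDGET_USD = 0.01  # hypothetical per-session alert threshold
over_budget = total > BUDGET_USD
if over_budget:
    print(f"budget alert: ${total:.5f} exceeds ${BUDGET_USD}")
```

The same aggregation grouped by agent or by model gives the per-agent and per-model breakdowns.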
Session Replay
Replay entire agent sessions step-by-step: prompts, responses, tool calls, errors.
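Step-by-step replay amounts to walking an ordered event log of the session. A small sketch with an illustrative event schema (not the AgentOps wire format):

```python
# Sketch of step-by-step session replay over a recorded event log.
# The event schema here is illustrative, not the AgentOps wire format.
session_log = [
    {"step": 2, "type": "tool_call", "data": "search(query='report')"},
    {"step": 1, "type": "prompt",    "data": "Summarize the report"},
    {"step": 3, "type": "response",  "data": "The report says ..."},
]

def replay(log):
    """Yield human-readable lines in step order, whatever order events arrived in."""
    for event in sorted(log, key=lambda e: e["step"]):
        yield f"[{event['step']}] {event['type']}: {event['data']}"

steps = list(replay(session_log))
for line in steps:
    print(line)
```

Sorting by step number means events can be ingested out of order and still replay deterministically.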
Error Analysis
Automatic detection of hallucinations, infinite loops, and failed tool calls.
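One common heuristic for catching an agent stuck in a loop is flagging the session when the same tool call repeats several times in a row. A sketch under that assumption, with an illustrative threshold and call format:

```python
# One simple loop-detection heuristic: flag a session when an identical
# tool call repeats `threshold`+ times consecutively. The threshold and
# call format are illustrative, not AgentOps's detection logic.
from itertools import groupby

def detect_loop(tool_calls, threshold=3):
    """Return True if any identical call occurs `threshold` or more times in a row."""
    for _, run in groupby(tool_calls):
        if len(list(run)) >= threshold:
            return True
    return False

calls = ["search('x')", "search('x')", "search('x')", "fetch(url)"]
print(detect_loop(calls))  # True: search('x') repeats 3 times consecutively
```

Real detectors combine several such signals (repeated calls, non-terminating plans, exceptions from tools) rather than any single rule.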
Benchmarking
Compare agent performance across models, prompts, and configurations.
Dashboard
Web dashboard for team-wide monitoring, analytics, and alerting.
FAQ
Q: What is AgentOps? A: A Python SDK for AI agent observability — cost tracking, session replay, benchmarking, and error analysis. Auto-instruments LLM calls. 5.4K+ stars.
Q: Does AgentOps add latency? A: Minimal. Instrumentation is asynchronous and non-blocking, typically adding under 5 ms of overhead per LLM call.
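The general reason async instrumentation stays cheap is that the hot path only enqueues an event; a background worker does the slow export off the request path. A generic stdlib sketch of that producer/consumer pattern (not AgentOps's actual implementation):

```python
# Generic non-blocking instrumentation pattern: the instrumented call does an
# O(1) enqueue; a background thread handles the (slow) export. This is a
# sketch of the technique, not AgentOps's actual implementation.
import queue
import threading

event_queue: "queue.Queue" = queue.Queue()
exported = []  # stand-in for a remote collector

def exporter():
    while True:
        event = event_queue.get()
        if event is None:  # shutdown sentinel
            break
        exported.append(event)  # real code would batch and send over the network

worker = threading.Thread(target=exporter, daemon=True)
worker.start()

# Hot path: per-call cost is just an in-memory enqueue, no network I/O.
for i in range(3):
    event_queue.put({"call": i})

event_queue.put(None)  # flush and stop the worker
worker.join()
print(len(exported))  # 3
```

Because the enqueue never waits on the network, per-call overhead stays roughly constant even when the exporter is slow.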