What is Helicone?
Helicone is an open-source LLM observability platform (5k+ GitHub stars) with one-line proxy integration: change your client's base URL and you get request logging, cost tracking, caching, rate limiting, and prompt version management.
For: Teams running LLM applications in production who need non-invasive observability.
Core Features
1. Zero-SDK Integration
Just change your existing client's base URL; there is no SDK to install. Supports OpenAI, Anthropic, and Azure OpenAI (first sketch after this list).
2. Real-Time Dashboard
Requests, latency, cost, and error rate at a glance.
3. Caching and Rate Limiting
Enabled per request via HTTP headers, with no changes to application logic (second sketch after this list).
4. Self-Hostable
Deploy with Docker Compose in one command.
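
To make the zero-SDK claim concrete, here is a minimal sketch of feature 1 using the official OpenAI Python SDK. The oai.helicone.ai base URL and the Helicone-Auth header follow Helicone's documented proxy setup as of writing; the model name and environment variable names are illustrative.

```python
# pip install openai
import os

from openai import OpenAI

# Swap the base URL so every request transits Helicone's proxy.
# Helicone-Auth identifies your Helicone account; the OpenAI key
# is forwarded to OpenAI unchanged.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
    },
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Hello from behind the proxy!"}],
)
print(response.choices[0].message.content)
```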
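
And a sketch of feature 3: caching and rate limiting are opt-in per request through headers on the same client. The header names and the policy string format below are assumptions based on Helicone's docs; verify them against the current reference before relying on them.

```python
# Reuses the `client` from the previous sketch.
# Helicone-Cache-Enabled turns on response caching for this request;
# Helicone-RateLimit-Policy declares a quota (here, assumed to mean
# 100 requests per 60-second window). Both header names and the
# policy syntax are assumptions to verify.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Define observability."}],
    extra_headers={
        "Helicone-Cache-Enabled": "true",
        "Helicone-RateLimit-Policy": "100;w=60",
    },
)
```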
FAQ
Q: What is the latency overhead?
A: The proxy adds under 50 ms, since logging happens asynchronously.
Q: Does it support Claude?
A: Yes. Change the base URL to anthropic.helicone.ai (sketch below).
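
For completeness, a minimal sketch of the Claude route using the official Anthropic Python SDK. The base URL comes from the answer above, and the Helicone-Auth header mirrors the OpenAI setup; the model name is illustrative.

```python
# pip install anthropic
import os

import anthropic

# Same pattern as with OpenAI: route the SDK through Helicone's
# Anthropic proxy and authenticate the proxy with Helicone-Auth.
client = anthropic.Anthropic(
    api_key=os.environ["ANTHROPIC_API_KEY"],
    base_url="https://anthropic.helicone.ai",
    default_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
    },
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model choice
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello, Claude!"}],
)
print(message.content[0].text)
```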