April 8, 2026 · 1 min read

Helicone — LLM Observability and Prompt Management

Open-source LLM observability platform. One-line proxy integration for request logging, cost tracking, caching, rate limiting, and prompt versioning across all providers.

What is Helicone?

Helicone is an open-source LLM observability platform built around a proxy: point your client at its base URL and you get request logging, cost tracking, caching, rate limiting, and prompt version management with no SDK to install.

In one sentence: an open-source LLM observability platform (5k+ GitHub stars) that you integrate by changing one base URL and that covers logging, cost tracking, caching, and rate limiting.

For: Teams running LLM applications in production who need non-invasive observability.

Core Features

1. Zero-SDK Integration

Just change the base URL — supports OpenAI, Anthropic, Azure.
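
For example, with the OpenAI Python SDK the switch is one constructor argument. This is a minimal sketch: the oai.helicone.ai base URL and the Helicone-Auth header follow Helicone's documented proxy setup, and the environment variable names are placeholders.

```python
import os
from openai import OpenAI

# Route requests through Helicone's proxy instead of api.openai.com.
# Helicone-Auth carries your Helicone API key; the OpenAI key is unchanged.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello through the proxy"}],
)
print(response.choices[0].message.content)
```

Every request made with this client now appears in the Helicone dashboard; no other application code changes.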

2. Real-Time Dashboard

Requests, latency, cost, and error rate at a glance.

3. Caching and Rate Limiting

Enable via headers — no code changes needed.
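
Concretely, both features ride on the same client as extra default headers. Helicone-Cache-Enabled is the documented cache switch; the Helicone-RateLimit-Policy value shown (a quota plus a window in seconds) is my reading of the policy syntax, so treat it as an assumption and verify it against the docs.

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        # Serve repeated identical requests from Helicone's cache.
        "Helicone-Cache-Enabled": "true",
        # Assumed policy syntax: 1000 requests per 3600-second window.
        "Helicone-RateLimit-Policy": "1000;w=3600",
    },
)
```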

4. Self-Hostable

Deploy with Docker Compose in one command.
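
A sketch of what that looks like, assuming the compose file lives in the repository's docker/ directory; follow the repo's self-hosting guide for the exact path.

```bash
git clone https://github.com/Helicone/helicone.git
cd helicone/docker      # assumed location of the compose file
docker compose up -d    # brings the stack up in the background
```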

FAQ

Q: What is the latency overhead? A: The proxy adds under 50 ms, and logging is asynchronous.

Q: Does it support Claude? A: Yes; change the base URL to anthropic.helicone.ai.
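
Applied to the Anthropic Python SDK, that answer becomes the sketch below; the host comes from the FAQ answer, and the Helicone-Auth header mirrors the OpenAI setup.

```python
import os
from anthropic import Anthropic

# Same base-URL swap as with OpenAI, using the host from the FAQ above.
client = Anthropic(
    api_key=os.environ["ANTHROPIC_API_KEY"],
    base_url="https://anthropic.helicone.ai",
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # substitute your Claude model id
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello via Helicone"}],
)
print(message.content[0].text)
```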

Sources and Acknowledgements

Helicone/helicone — 5k+ stars, Apache 2.0
