# Helicone — LLM Observability and Prompt Management

> Open-source LLM observability platform. One-line proxy integration for request logging, cost tracking, caching, rate limiting, and prompt versioning across all providers.

## Install

Save in your project root:

## Quick Use

```python
# Just change the base URL — no SDK needed
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
    base_url="https://oai.helicone.ai/v1",
    default_headers={"Helicone-Auth": "Bearer hlc-..."},
)

# All calls are now logged and tracked
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

## What is Helicone?

Helicone is an open-source LLM observability platform that works as a proxy between your app and LLM providers. With a one-line base URL change (no SDK needed), you get request logging, cost tracking, latency metrics, caching, rate limiting, and prompt versioning — for any LLM provider.

**Answer-Ready**: Helicone is an open-source LLM observability platform. One-line proxy integration (change the base URL, no SDK) for request logging, cost tracking, caching, rate limiting, and prompt versioning across OpenAI, Anthropic, and all providers. 5k+ GitHub stars.

**Best for**: Teams running LLM apps in production who need observability without adding an SDK.

**Works with**: OpenAI, Anthropic, Google, Azure, any OpenAI-compatible API.

**Setup time**: Under 1 minute.

## Core Features

### 1. Zero-SDK Integration

Just change the base URL (and keep the `Helicone-Auth` header from Quick Use):

```python
# OpenAI
client = OpenAI(base_url="https://oai.helicone.ai/v1")

# Anthropic
client = Anthropic(base_url="https://anthropic.helicone.ai")

# Azure OpenAI
client = AzureOpenAI(azure_endpoint="https://oai.helicone.ai")
```

### 2. Request Dashboard

Real-time dashboard showing:

- All requests with input/output
- Latency percentiles (p50, p95, p99)
- Token usage per model
- Cost breakdown per user/feature
- Error rates and patterns
- Geographic distribution
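The latency percentiles above are the standard nearest-rank statistics over per-request latencies. As a minimal sketch of what those numbers mean (the `percentile` helper and the sample latencies are invented for illustration, not Helicone output):

```python
# Nearest-rank percentile, the common definition behind p50/p95/p99 metrics.
def percentile(values, p):
    """Return the p-th percentile of values using the nearest-rank method."""
    ordered = sorted(values)
    rank = max(1, round(p / 100 * len(ordered)))  # 1-based rank into sorted data
    return ordered[rank - 1]

# Hypothetical per-request latencies in milliseconds
latencies_ms = [95, 110, 120, 130, 140, 160, 180, 200, 240, 3100]

print("p50:", percentile(latencies_ms, 50))  # 140
print("p95:", percentile(latencies_ms, 95))  # 3100
```

Note how p95 surfaces the 3.1 s outlier that the median hides — the reason dashboards report tail percentiles, not just averages.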
### 3. Cost Tracking

```
Dashboard view:

Today:      $42.50  (1,250 requests)
This week:  $285.30 (8,700 requests)

By model:
  gpt-4o:        $180 (63%)
  claude-sonnet: $85  (30%)
  gpt-4o-mini:   $20  (7%)
```

### 4. Caching

```python
# Enable caching with a header
client = OpenAI(
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": "Bearer hlc-...",
        "Helicone-Cache-Enabled": "true",
    },
)

# Identical requests return cached results instantly
```

### 5. Rate Limiting

```python
headers = {
    "Helicone-RateLimit-Policy": "10;w=60",  # 10 requests per 60 seconds
}
```

### 6. Custom Properties

```python
headers = {
    "Helicone-Property-User": "user-123",
    "Helicone-Property-Feature": "chat",
    "Helicone-Property-Environment": "production",
}

# Filter and group by these properties in the dashboard
```

### 7. Prompt Versioning

```python
headers = {
    "Helicone-Prompt-Id": "customer-support-v3",
}

# Track performance per prompt version
```

## Self-Hosting

```bash
git clone https://github.com/Helicone/helicone
cd helicone
docker compose up -d

# Dashboard at http://localhost:3000
```

## FAQ

**Q: Does it add latency?**
A: The Helicone proxy adds < 50 ms. Requests are logged asynchronously.

**Q: Is my data safe?**
A: Self-host for full data control. The cloud version is SOC 2 Type II compliant.

**Q: Can I use it with Anthropic Claude?**
A: Yes — change the base URL to `https://anthropic.helicone.ai`.

## Source & Thanks

> Created by [Helicone](https://github.com/Helicone). Licensed under Apache 2.0.
>
> [Helicone/helicone](https://github.com/Helicone/helicone) — 5k+ stars
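Because every Helicone feature above is switched on by an HTTP header, the features compose freely on a single client. A minimal sketch of a helper that merges them (the `helicone_headers` function and its parameter names are this document's invention, not part of any SDK; the header names are the ones shown in the feature sections):

```python
# Helicone is controlled entirely through HTTP headers, so caching, rate
# limiting, and custom properties combine by merging one header dict.
def helicone_headers(api_key, user=None, cache=False, rate_limit=None):
    """Build a Helicone header dict; only Helicone-Auth is mandatory."""
    headers = {"Helicone-Auth": f"Bearer {api_key}"}
    if cache:
        headers["Helicone-Cache-Enabled"] = "true"
    if rate_limit:  # e.g. "10;w=60" -> 10 requests per 60 seconds
        headers["Helicone-RateLimit-Policy"] = rate_limit
    if user:
        headers["Helicone-Property-User"] = user
    return headers

headers = helicone_headers("hlc-...", user="user-123", cache=True, rate_limit="10;w=60")
# Pass as default_headers=headers when constructing the client from Quick Use.
```

Passing the merged dict as `default_headers=` applies the settings to every request from that client, matching the per-feature snippets above.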
---

Source: https://tokrepo.com/en/workflows/8a35faad-0abb-421b-ae18-e869579db4b4
Author: AI Open Source