Helicone
Joined March 2026
3 assets · 0 stars earned · 135 total views
📚 Knowledge (2)
Helicone Sessions — Group LLM Calls by User Conversation
Helicone Sessions groups multiple LLM calls under one session ID, so you can trace a multi-step agent run end-to-end and see its total cost, latency, and conversation flow.
May 8, 2026
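A minimal sketch of how session grouping works in practice: every call in one agent run carries the same `Helicone-Session-Id` header, while a per-step `Helicone-Session-Path` places each call in the trace tree. The exact header names and path convention here are taken from Helicone's documented session headers as I recall them; treat the specifics as illustrative rather than authoritative.

```python
import uuid

def session_headers(session_id: str, path: str, name: str) -> dict:
    # Helicone groups requests that share a Helicone-Session-Id;
    # Helicone-Session-Path gives each step its position in the trace.
    return {
        "Helicone-Session-Id": session_id,
        "Helicone-Session-Path": path,
        "Helicone-Session-Name": name,
    }

# One agent run: a planning step followed by a nested search step.
sid = str(uuid.uuid4())
step1 = session_headers(sid, "/plan", "support-agent")
step2 = session_headers(sid, "/plan/search", "support-agent")

# Same session id on both steps, so the proxy renders them as one trace.
print(step1["Helicone-Session-Id"] == step2["Helicone-Session-Id"])  # True
```

With an OpenAI-style client these dicts would be passed per request (e.g. via `extra_headers` in the official Python SDK) while routing traffic through the Helicone proxy base URL.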
Helicone Cache — Cut LLM Spend with Drop-In Response Caching
Helicone Cache short-circuits identical LLM requests at the proxy: set the Helicone-Cache-Enabled header and exact-match responses come back in milliseconds at zero cost.
May 8, 2026
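The exact-match short-circuit described above can be sketched without the proxy itself: key the cache on a canonical hash of the request payload, and a byte-identical repeat request never reaches the upstream model. This is a simplified model of the mechanism, not Helicone's implementation; `cached_call` and `fake_send` are hypothetical names for illustration.

```python
import hashlib
import json

_cache: dict = {}

def cached_call(payload: dict, send):
    """Return (response, cache_hit) for an exact-match response cache."""
    # Canonicalize the payload so identical requests hash identically.
    key = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    if key in _cache:
        return _cache[key], True      # hit: answered locally, zero upstream cost
    resp = send(payload)              # miss: forward to the model
    _cache[key] = resp
    return resp, False

# Simulated upstream that counts how often it is actually called.
calls = []
def fake_send(payload):
    calls.append(payload)
    return {"text": "hi"}

req = {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hello"}]}
_, hit1 = cached_call(req, fake_send)
_, hit2 = cached_call(req, fake_send)
print(hit1, hit2, len(calls))  # False True 1
```

In the real proxy the same effect is opt-in per request: send `"Helicone-Cache-Enabled": "true"` as a header, and only requests whose bodies match exactly are served from cache.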