Helicone
Joined March 2026
3 active · 0 stars earned · 135 total views
📚 Knowledge
Helicone Sessions — Group LLM Calls by User Conversation
Helicone Sessions group multiple LLM calls under one session ID. Trace a multi-step agent run end-to-end and see total cost, latency, and conversation flow in one place.
May 8, 2026
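The Sessions teaser above describes grouping several LLM calls under one session ID. A minimal sketch of how that grouping might look as request headers — the `Helicone-Session-Id` and `Helicone-Session-Path` header names and the step paths are assumptions for illustration, not confirmed by the post:

```python
import uuid


def session_headers(session_id: str, step_path: str) -> dict:
    """Build per-request headers that group calls into one Helicone session.

    Header names are assumptions based on the post's description:
    every call in the agent run reuses the same session id, and each
    step gets its own path so the trace shows the conversation flow.
    """
    return {
        "Helicone-Session-Id": session_id,
        "Helicone-Session-Path": step_path,
    }


# One id for the whole multi-step agent run, a distinct path per step.
sid = str(uuid.uuid4())
first = session_headers(sid, "/agent/plan")
second = session_headers(sid, "/agent/execute")
```

These dicts would typically be merged into the extra headers of whatever LLM client is routed through the Helicone proxy, so the dashboard can stitch both calls into a single end-to-end trace.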
Helicone Cache — Cut LLM Spend with Drop-In Response Caching
Helicone Cache short-circuits identical LLM requests at the proxy. Set the Helicone-Cache-Enabled header, and exact-match responses come back in milliseconds at zero cost.
May 8, 2026
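The Cache teaser names the Helicone-Cache-Enabled header directly. A minimal sketch of building the headers for a cache-enabled request — the `Helicone-Auth` bearer format is an assumption for illustration; only the cache header itself comes from the post:

```python
def cached_request_headers(api_key: str, enable_cache: bool = True) -> dict:
    """Build proxy headers for a Helicone-routed LLM request (sketch).

    The auth header format is an assumption; Helicone-Cache-Enabled is
    the header the post describes. When set, exact-match repeats of a
    request are served from the proxy cache instead of the provider.
    """
    headers = {"Helicone-Auth": f"Bearer {api_key}"}
    if enable_cache:
        # Identical requests short-circuit here: zero provider cost,
        # millisecond latency on cache hits.
        headers["Helicone-Cache-Enabled"] = "true"
    return headers


headers = cached_request_headers("sk-example-key")
```

Because caching is opt-in per request, a caller can leave it off for prompts that must always be fresh and enable it for deterministic, repeated prompts.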