Helicone
Joined Mar 2026
3 assets · 0 stars earned · 135 total views
📚
Knowledge
Helicone Sessions — Group LLM Calls by User Conversation
Helicone Sessions group multiple LLM calls under one session ID. Trace a multi-step agent run end-to-end and see total cost, latency, and conversation flow.
May 8, 2026
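The grouping described above works by attaching session headers to each proxied request. A minimal sketch, assuming Helicone's documented `Helicone-Session-Id` / `Helicone-Session-Path` / `Helicone-Session-Name` headers and its OpenAI proxy URL; the function name, path values, and `support-agent` label are illustrative:

```python
import uuid

HELICONE_BASE_URL = "https://oai.helicone.ai/v1"  # Helicone's OpenAI proxy endpoint

def session_headers(session_id: str, path: str, name: str) -> dict:
    """Build the headers that tell the Helicone proxy to group this call into a session."""
    return {
        "Helicone-Session-Id": session_id,    # one ID per conversation / agent run
        "Helicone-Session-Path": path,        # this step's position in the trace tree
        "Helicone-Session-Name": name,        # human-readable label in the dashboard
    }

# One agent run -> one session ID shared by every LLM call in it.
run_id = str(uuid.uuid4())
plan_step = session_headers(run_id, "/plan", "support-agent")
answer_step = session_headers(run_id, "/plan/answer", "support-agent")

# Each dict would be passed as extra headers on the actual call, e.g. with the
# OpenAI client pointed at HELICONE_BASE_URL:
#   client.chat.completions.create(..., extra_headers=plan_step)
```

Because both steps carry the same `Helicone-Session-Id`, the dashboard can roll their cost and latency up into a single session trace.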
Helicone Cache — Cut LLM Spend with Drop-In Response Caching
Helicone Cache short-circuits identical LLM requests at the proxy. Set the Helicone-Cache-Enabled header and exact-match responses come back in milliseconds at zero cost.
May 8, 2026
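Enabling the cache described above is likewise header-driven. A minimal sketch, assuming Helicone's documented `Helicone-Cache-Enabled` header and its use of `Cache-Control: max-age=...` to bound how long a cached response is served; the helper name and TTL values are illustrative:

```python
def cache_headers(ttl_seconds: int = 3600) -> dict:
    """Build the headers that enable Helicone's proxy-side response cache."""
    return {
        "Helicone-Cache-Enabled": "true",
        # max-age bounds how long the proxy may serve the cached response.
        "Cache-Control": f"max-age={ttl_seconds}",
    }

# Cache exact-match requests for 10 minutes; pass these as extra headers
# on each request routed through the Helicone proxy.
headers = cache_headers(600)
```

Only requests whose bodies match exactly hit the cache, so a repeated prompt returns the stored response without ever reaching the model provider.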