Quick Use
- Sign up at posthog.com → copy project API key
pip install "posthog[ai]"- Replace
from openai import OpenAIwithfrom posthog.ai.openai import OpenAI— passposthog_client=posthog
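The swap at a glance (a sketch; the full, commented example is in the "Drop in (Python)" section below):

```python
# Before:
#   from openai import OpenAI
# After: PostHog's drop-in wrapper, same call surface
from posthog.ai.openai import OpenAI
```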
Intro
PostHog's LLM Observability traces every LLM call your app makes (model, prompt, response, latency, cost) and ties them to user sessions and feature flags. The wrapper auto-instruments OpenAI / Anthropic / LangChain / Vercel AI SDK calls; you don't change call sites.
- Best for: production teams who already use PostHog for product analytics and want LLM traces in the same dashboard
- Works with: Python / Node SDK; OpenAI / Anthropic / LangChain
- Setup time: 5 minutes
Drop in (Python)
```python
import os

import posthog
from posthog.ai.openai import OpenAI

posthog.project_api_key = os.environ["POSTHOG_API_KEY"]
posthog.host = "https://us.posthog.com"

# Drop-in replacement: same API as openai.OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"], posthog_client=posthog)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    posthog_distinct_id="user_42",            # tie to a user
    posthog_properties={"feature": "chat"},   # custom metadata
)
```

PostHog auto-tracks the model, latency, input/output tokens, cost (via its built-in pricing table), and any errors.
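Related calls can share a single trace in the dashboard. A sketch reusing `client` from above and the wrapper's `posthog_trace_id` property; minting the id with `uuid` is just one convention:

```python
import uuid

# One shared trace id groups related generations in the Traces view
trace_id = str(uuid.uuid4())

draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Draft a haiku about logs"}],
    posthog_distinct_id="user_42",
    posthog_trace_id=trace_id,
)

critique = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Critique this haiku: {draft.choices[0].message.content}"}],
    posthog_distinct_id="user_42",
    posthog_trace_id=trace_id,  # same id, same trace
)
```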
Anthropic / LangChain wrappers
```python
from posthog.ai.anthropic import Anthropic
from posthog.ai.langchain import CallbackHandler

# Anthropic: same drop-in pattern as the OpenAI wrapper
client = Anthropic(api_key=..., posthog_client=posthog)

# LangChain: attach the callback handler via the run config
callbacks = [CallbackHandler(client=posthog, distinct_id="user_42")]
chain.invoke({"input": "..."}, config={"callbacks": callbacks})
```

What you get in the dashboard
- Per-call: prompt, response, model, latency, cost, error, properties
- Aggregates: cost by model / by feature / by user, p95 latency, error rate
- Funnels: combine LLM events with normal product events ("user opened feature → 3 LLM calls → did/didn't convert"); see the sketch after this list
- Cohorts: filter LLM cost by user tier, signup date, etc.
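The product-event side of such a funnel is ordinary `posthog.capture` calls; a sketch with hypothetical event names (`chat_opened`, `chat_converted`):

```python
import posthog

# A normal product event lands in the same project as the LLM events,
# so "chat_opened → LLM generation → chat_converted" is one funnel insight
posthog.capture(distinct_id="user_42", event="chat_opened")

# ... instrumented LLM calls happen here, captured by the wrapper ...

posthog.capture(
    distinct_id="user_42",
    event="chat_converted",
    properties={"feature": "chat"},  # matches posthog_properties above
)
```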
Why use PostHog instead of Helicone or Langfuse
PostHog is a general product-analytics platform with LLM observability as one layer. If your team already uses PostHog for funnels, retention, and feature flags, adding LLM traces gives you correlation across all of that data. Helicone and Langfuse are LLM-specific: deeper LLM features, but no product-analytics overlap.
FAQ
Q: Is PostHog free? A: Yes, there's a generous free tier (1M product-analytics events/month, 100K LLM events/month). Self-hosting is free under the MIT license. Cloud paid tiers add longer retention and team features.
Q: Will it slow down my LLM calls? A: Negligibly. The wrapper sends events asynchronously in the background, so your call returns in essentially the same time it would without instrumentation; PostHog buffers and batches the trace upload.
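One consequence of that batching: short-lived scripts should drain the queue before exiting, or the last events may be lost. A sketch using the SDK's flush/shutdown calls:

```python
import atexit

import posthog

# shutdown() flushes pending events, then stops the background consumer
atexit.register(posthog.shutdown)

# ...or flush explicitly at a checkpoint without stopping the client:
posthog.flush()
```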
Q: Can I exclude PII from prompts/responses?
A: Yes. Pass `posthog_privacy_mode=True` to redact prompt/response bodies (only metadata is sent), or use property scrubbing on the PostHog side. Often required for healthcare / finance compliance.
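A sketch of privacy mode on a single call; `patient_note` is a hypothetical variable standing in for sensitive input:

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": patient_note}],  # body never reaches PostHog
    posthog_distinct_id="user_42",
    posthog_privacy_mode=True,  # metadata (model, tokens, latency, cost) is still captured
)
```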
Source & Thanks
Built by PostHog. Licensed under MIT.
PostHog/posthog — ⭐ 24,000+