Quick Use
- PostHog dashboard → Feature Flags → New flag
- `posthog.get_feature_flag("my-flag", distinct_id=user_id)` (Python) or `posthog.isFeatureEnabled("my-flag")` (JS)
- Branch your code based on the variant returned
Intro
PostHog Feature Flags let you toggle features by user, cohort, or rollout percentage. Wrap a new LLM feature behind a flag, ramp 1% → 5% → 20% → 100% over a week, kill it instantly if Helicone shows error spikes. Best for: any team shipping LLM features to real users without yolo deploys. Works with: PostHog SDK in 20+ languages, server-side and client-side. Setup time: 5 minutes.
Define a flag in PostHog
PostHog → Feature Flags → New flag:
- Key: `new-summarization-model`
- Rollout: 5% of users (sticky by `distinct_id`)
- Override: enabled for cohort `internal-staff`
- Variant flag (A/B): 50/50 split between `claude-sonnet` and `gpt-4o`
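The "sticky by `distinct_id`" rollout works by hashing the user into a stable bucket. A minimal sketch of the general technique (illustrative only, not PostHog's exact algorithm):

```python
import hashlib

def in_rollout(flag_key: str, distinct_id: str, percentage: float) -> bool:
    # Hash flag key + user id deterministically to a float in [0, 1).
    # The same user always lands in the same bucket, so a 5% rollout
    # never flips between requests, and raising it to 20% only adds users.
    digest = hashlib.sha1(f"{flag_key}.{distinct_id}".encode()).hexdigest()
    bucket = int(digest[:15], 16) / 0xFFFFFFFFFFFFFFF
    return bucket < percentage
```

Because the bucket depends on the flag key too, a user who is in the 5% for one flag is not automatically in the 5% for another.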
Server-side (Python)

```python
import os

import posthog

posthog.project_api_key = os.environ["POSTHOG_API_KEY"]

def summarize(text: str, user_id: str):
    variant = posthog.get_feature_flag(
        "new-summarization-model",
        distinct_id=user_id,
    )
    if variant == "claude-sonnet":
        return claude.summarize(text)
    elif variant == "gpt-4o":
        return openai_summarize(text)
    else:
        return legacy_summarize(text)  # control
```

Client-side (TypeScript)
```typescript
import posthog from "posthog-js";

posthog.init("phc_...", { api_host: "https://us.posthog.com" });

if (posthog.isFeatureEnabled("show-ai-chat")) {
  renderChatWidget();
}

const variant = posthog.getFeatureFlag("new-summarization-model");
// "claude-sonnet" | "gpt-4o" | undefined (control)
```

Combine with LLM observability
When you tie flags + LLM traces with the same distinct_id, you get end-to-end:
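On the server, one way to do this is to stamp each LLM event with the active variant using PostHog's `$feature/<flag-key>` property convention, so dashboard breakdowns by variant pick up your cost and error events. A sketch (the event name and metric fields are illustrative):

```python
def llm_event_properties(flag_key: str, variant: str,
                         cost_usd: float, latency_ms: int) -> dict:
    # Attach the active variant under the "$feature/<key>" property name
    # so events can be broken down by flag variant in PostHog.
    return {
        f"$feature/{flag_key}": variant,
        "cost_usd": cost_usd,
        "latency_ms": latency_ms,
    }

props = llm_event_properties("new-summarization-model", "claude-sonnet", 0.012, 900)
# then: posthog.capture(user_id, "llm_call", properties=props)
```

Because the capture uses the same `distinct_id` as the flag evaluation, every LLM trace is attributable to exactly one cohort.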
PostHog dashboard:
- Cohort: users on `new-summarization-model = claude-sonnet` (5% rollout)
  - LLM cost p50: $0.012 / call
  - LLM error rate: 0.3%
  - User satisfaction (custom event): 4.6/5
- Cohort: users on `new-summarization-model = gpt-4o` (5% rollout)
  - LLM cost p50: $0.018 / call
  - LLM error rate: 1.2%
  - User satisfaction: 4.4/5
→ Promote claude-sonnet to 100%

Kill switch
```python
# In your incident playbook
posthog.update_feature_flag("new-summarization-model", {"active": False})
```

Within 30 seconds, all server and client SDKs see the flag disabled, and your code falls back to the control branch. No deploy needed.
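If your SDK doesn't expose a flag-update helper, the same toggle can be done through PostHog's REST API. A minimal stdlib sketch, assuming the `/api/projects/<project_id>/feature_flags/<flag_id>/` endpoint and a personal API key (the project key is read-only; the IDs below are placeholders):

```python
import json
import urllib.request

def kill_flag(host: str, project_id: str, flag_id: int,
              personal_api_key: str, dry_run: bool = True):
    # PATCH {"active": false} deactivates the flag project-wide;
    # SDKs pick the change up on their next poll.
    url = f"{host}/api/projects/{project_id}/feature_flags/{flag_id}/"
    req = urllib.request.Request(
        url,
        data=json.dumps({"active": False}).encode(),
        headers={
            "Authorization": f"Bearer {personal_api_key}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    if dry_run:
        return req  # inspect the request without sending it
    return urllib.request.urlopen(req)
```

Keeping this as a one-liner in the incident playbook means the on-call engineer never has to find the right dashboard page under pressure.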
FAQ
Q: How fast is the kill switch?
A: Server-side SDKs poll every 30 seconds by default. Client-side updates on next page load. For sub-30s reaction, switch to PostHog's bootstrap mode or use webhooks to your own kill-switch infrastructure.
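One shape for that sub-30s path is a local override consulted before the PostHog flag, flipped by your own webhook or ops command. A hypothetical sketch:

```python
class KillSwitch:
    # In-process override checked before any PostHog flag lookup.
    # A webhook handler or ops command can flip it instantly,
    # with no wait for the SDK's next poll.
    def __init__(self):
        self._killed: set[str] = set()

    def kill(self, flag_key: str) -> None:
        self._killed.add(flag_key)

    def is_killed(self, flag_key: str) -> bool:
        return flag_key in self._killed

kill_switch = KillSwitch()
# In request handlers: if kill_switch.is_killed("new-summarization-model"),
# skip the flag lookup entirely and take the control branch.
```

In a multi-process deployment you would back this with shared state (e.g. a key in Redis) rather than a per-process set.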
Q: Are flag evaluations free or do they count as events?
A: Flag evaluations are free. They count toward a separate decide quota (much larger than events quota). LLM calls within flag-gated code count as normal LLM events.
Q: Does this work with Edge / serverless?
A: Yes — there's a PostHog edge SDK (Cloudflare Workers, Vercel Edge). For very high RPS edge use, prefer client-side flags (lower per-request latency).
Source & Thanks
Built by PostHog. Licensed under MIT.
PostHog/posthog — ⭐ 24,000+