# PostHog Feature Flags — Gradually Roll Out AI Features

> PostHog Feature Flags toggle features by user, cohort, or percentage. Wrap LLM features behind a flag, A/B test different prompts, flip the kill switch on incidents.

## Install

Save in your project root:

## Quick Use

1. PostHog dashboard → Feature Flags → New flag
2. `posthog.get_feature_flag("my-flag", distinct_id=user_id)` (Python) or `posthog.isFeatureEnabled("my-flag")` (JS)
3. Branch your code based on the variant returned

---

## Intro

PostHog Feature Flags let you toggle features by user, cohort, or rollout percentage. Wrap a new LLM feature behind a flag, ramp 1% → 5% → 20% → 100% over a week, and kill it instantly if Helicone shows error spikes.

- Best for: any team shipping LLM features to real users without yolo deploys.
- Works with: PostHog SDK in 20+ languages, server-side and client-side.
- Setup time: 5 minutes.

---

### Define a flag in PostHog

PostHog → Feature Flags → New flag:

- Key: `new-summarization-model`
- Rollout: 5% of users (sticky by `distinct_id`)
- Override: enabled for cohort `internal-staff`
- Variant flag (A/B): 50/50 split between `claude-sonnet` and `gpt-4o`

### Server-side (Python)

```python
import os

import posthog

posthog.api_key = os.environ["POSTHOG_API_KEY"]


def summarize(text: str, user_id: str):
    variant = posthog.get_feature_flag(
        "new-summarization-model",
        distinct_id=user_id,
    )
    if variant == "claude-sonnet":
        return claude.summarize(text)
    elif variant == "gpt-4o":
        return openai_summarize(text)
    else:
        return legacy_summarize(text)  # control
```

### Client-side (TypeScript)

```typescript
import posthog from "posthog-js";

posthog.init("phc_...", { api_host: "https://us.posthog.com" });

if (posthog.isFeatureEnabled("show-ai-chat")) {
  renderChatWidget();
}

const variant = posthog.getFeatureFlag("new-summarization-model");
// "claude-sonnet" | "gpt-4o" | undefined (control)
```

### Combine with LLM observability

When you tie flags + LLM traces together with the same `distinct_id`, you get end-to-end visibility:

```
PostHog dashboard:
- Cohort: users on `new-summarization-model = claude-sonnet` (5% rollout)
  - LLM cost p50: $0.012 / call
  - LLM error rate: 0.3%
  - User satisfaction (custom event): 4.6/5
- Cohort: users on `new-summarization-model = gpt-4o` (5% rollout)
  - LLM cost p50: $0.018 / call
  - LLM error rate: 1.2%
  - User satisfaction: 4.4/5
→ Promote claude-sonnet to 100%
```

### Kill switch

```python
# In your incident playbook
posthog.update_feature_flag("new-summarization-model", {"active": False})
```

Within 30 seconds, all server-side and client-side SDKs see the flag disabled, and your code falls back to the control branch. No deploy needed.

---

### FAQ

**Q: How fast is the kill switch?**

A: Server-side SDKs poll every 30 seconds by default. Client-side flags update on the next page load. For sub-30s reaction, switch to PostHog's `bootstrap` mode or use webhooks to drive your own kill-switch infrastructure.

**Q: Are flag evaluations free, or do they count as events?**

A: Flag evaluations are free. They count toward a separate `decide` quota (much larger than the events quota). LLM calls inside flag-gated code count as normal LLM events.

**Q: Does this work with Edge / serverless?**

A: Yes — there's a PostHog edge SDK (Cloudflare Workers, Vercel Edge). For very high-RPS edge use, prefer client-side flags (lower per-request latency).

---

## Source & Thanks

> Built by [PostHog](https://github.com/PostHog). Licensed under MIT.
>
> [PostHog/posthog](https://github.com/PostHog/posthog) — ⭐ 24,000+
---

Source: https://tokrepo.com/en/workflows/posthog-feature-flags-gradually-roll-out-ai-features
Author: PostHog
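---

### Appendix: why percentage rollouts are "sticky"

The guide leans on two properties of percentage rollouts: the same `distinct_id` always gets the same answer, and ramping 1% → 5% → 20% → 100% only ever adds users to the treatment group. Both fall out of deterministic hash bucketing. The sketch below is illustrative only — it is not PostHog's actual hashing scheme, and `bucket` / `is_enabled` are hypothetical helper names:

```python
import hashlib


def bucket(flag_key: str, distinct_id: str) -> float:
    """Hash (flag, user) to a stable value in [0, 1)."""
    digest = hashlib.sha1(f"{flag_key}.{distinct_id}".encode()).hexdigest()
    return int(digest, 16) / 16 ** len(digest)


def is_enabled(flag_key: str, distinct_id: str, rollout_pct: float) -> bool:
    """Sticky percentage rollout: same user, same answer, on every call."""
    return bucket(flag_key, distinct_id) < rollout_pct / 100


# Raising the rollout percentage can only turn users on, never off:
users = [f"user-{i}" for i in range(1000)]
on_at_5 = {u for u in users if is_enabled("new-summarization-model", u, 5)}
on_at_20 = {u for u in users if is_enabled("new-summarization-model", u, 20)}
assert on_at_5 <= on_at_20  # everyone in the 5% cohort stays in at 20%
```

In a scheme like this, the bucket depends only on the flag key and the `distinct_id`, so separate evaluations agree for the same user without storing any per-user state.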
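---

### Appendix: failing safe when the flag service is unreachable

The kill switch works because every branch in the guide has a control fallback. The same fallback should cover the case where flag evaluation itself fails (network blip, PostHog outage): degrade to the control branch rather than erroring. A minimal sketch — `get_variant_or_control` is a hypothetical wrapper, and `fetch_variant` stands in for `posthog.get_feature_flag`:

```python
def get_variant_or_control(fetch_variant, flag_key: str, distinct_id: str):
    """Evaluate a flag, falling back to control (None) on any failure.

    `fetch_variant` is any callable shaped like
    posthog.get_feature_flag(key, distinct_id=...); it is injected so the
    fallback logic can be exercised without a network.
    """
    try:
        return fetch_variant(flag_key, distinct_id=distinct_id)
    except Exception:
        return None  # control: the legacy code path keeps serving users


# A flag backend that is down still yields the control branch:
def unreachable(key, distinct_id):
    raise ConnectionError("posthog unreachable")


assert get_variant_or_control(unreachable, "new-summarization-model", "u1") is None
```

With this wrapper, a function like `summarize()` keeps returning its legacy path during an outage instead of raising.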