Configs · May 8, 2026 · 4 min read

PostHog Feature Flags — Gradually Roll Out AI Features

PostHog Feature Flags toggle features by user, cohort, or rollout percentage. Wrap LLM features behind a flag, A/B test different prompts, and flip a kill switch during incidents.

PostHog · Community

Agent-ready: this asset can be read and installed directly by agents. TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and the raw content to help agents judge fit, risk, and next actions.

Surface: any MCP/CLI agent · Type: Skill · Installation: Stage only (17/100) · Trust: New · Entry point: Asset

Universal CLI command:
npx tokrepo install 8d278dd2-b995-4e62-aa95-b4579a34538d
Introduction

PostHog Feature Flags let you toggle features by user, cohort, or rollout percentage. Wrap a new LLM feature behind a flag, ramp 1% → 5% → 20% → 100% over a week, kill it instantly if Helicone shows error spikes. Best for: any team shipping LLM features to real users without yolo deploys. Works with: PostHog SDK in 20+ languages, server-side and client-side. Setup time: 5 minutes.


Define a flag in PostHog

PostHog → Feature Flags → New flag:

  • Key: new-summarization-model
  • Rollout: 5% of users (sticky by distinct_id)
  • Override: enabled for cohort internal-staff
  • Variant flag (A/B): 50/50 split between claude-sonnet and gpt-4o
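
"Sticky by distinct_id" means the same user always falls on the same side of the rollout percentage. A minimal sketch of that idea, assuming deterministic hash bucketing (PostHog's actual hashing scheme differs in its details):

```python
import hashlib

def in_rollout(distinct_id: str, flag_key: str, percentage: float) -> bool:
    # Deterministic bucketing: hashing flag_key + distinct_id means the
    # same user always lands in the same bucket, so a 5% rollout is
    # "sticky" across requests and sessions.
    digest = hashlib.sha1(f"{flag_key}.{distinct_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percentage
```

Ramping 1% → 5% → 20% only ever adds users: everyone below the old threshold stays below the new one.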

Server-side (Python)

import os

import posthog

# posthog-python reads the project key from `project_api_key`
posthog.project_api_key = os.environ["POSTHOG_API_KEY"]

def summarize(text: str, user_id: str):
    variant = posthog.get_feature_flag(
        "new-summarization-model",
        distinct_id=user_id,
    )

    if variant == "claude-sonnet":
        return claude.summarize(text)
    elif variant == "gpt-4o":
        return openai_summarize(text)
    else:
        return legacy_summarize(text)  # control

Client-side (TypeScript)

import posthog from "posthog-js";

posthog.init("phc_...", { api_host: "https://us.posthog.com" });

if (posthog.isFeatureEnabled("show-ai-chat")) {
  renderChatWidget();
}

const variant = posthog.getFeatureFlag("new-summarization-model");
// "claude-sonnet" | "gpt-4o" | undefined (control)

Combine with LLM observability

When you tie flags and LLM traces together with the same distinct_id, you get an end-to-end picture:

PostHog dashboard:
  - Cohort: users on `new-summarization-model = claude-sonnet` (5% rollout)
  - LLM cost p50: $0.012 / call
  - LLM error rate: 0.3%
  - User satisfaction (custom event): 4.6/5

  - Cohort: users on `new-summarization-model = gpt-4o` (5% rollout)
  - LLM cost p50: $0.018 / call
  - LLM error rate: 1.2%
  - User satisfaction: 4.4/5

  → Promote claude-sonnet to 100%
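
A per-variant breakdown like the one above requires every LLM event to carry the active variant as a property. A minimal sketch; the event and property names here are our convention, not a PostHog schema, and the capture function is injected so you can pass `posthog.capture` in production:

```python
def capture_llm_call(capture, user_id: str, variant, cost_usd: float, error: bool) -> dict:
    """Send one LLM usage event tagged with the active flag variant.

    `capture` is any (distinct_id, event, properties) callable -- e.g.
    posthog.capture. Tagging each call with the variant is what lets the
    dashboard break down cost and error rate per cohort.
    """
    properties = {
        "flag_new_summarization_model": variant or "control",
        "llm_cost_usd": round(cost_usd, 4),
        "llm_error": error,
    }
    capture(user_id, "llm_summarize_call", properties)
    return properties
```

Call it right after each `summarize()` with the same `user_id` you passed to `get_feature_flag`, so flag cohort and LLM events share one distinct_id.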

Kill switch

# In your incident playbook: flags are managed via the PostHog project API,
# not the capture SDK -- a PATCH with active=false disables the flag.
curl -X PATCH "https://us.posthog.com/api/projects/<project_id>/feature_flags/<flag_id>/" \
  -H "Authorization: Bearer $POSTHOG_PERSONAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"active": false}'

Within about 30 seconds, server-side SDKs see the flag disabled (clients pick it up on their next flag refresh or page load), and your code falls back to the control branch. No deploy needed.
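
The fallback only works if flag evaluation itself never takes you down with it. A minimal defensive wrapper, assuming you inject the SDK call (`resolve_variant` is our helper, not a PostHog API):

```python
def resolve_variant(get_flag, user_id: str, default: str = "control") -> str:
    """Evaluate the flag defensively.

    `get_flag` is any (flag_key, distinct_id) callable -- e.g. a thin
    wrapper around posthog.get_feature_flag. Any SDK or network error,
    and any non-variant result (None, or a plain boolean flag), falls
    back to control so the kill switch always has somewhere safe to land.
    """
    try:
        variant = get_flag("new-summarization-model", user_id)
    except Exception:
        return default
    return variant if isinstance(variant, str) else default
```

With this, a flag-service outage degrades to the control branch instead of raising inside your request path.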


FAQ

Q: How fast is the kill switch? A: Server-side SDKs poll every 30 seconds by default. Client-side updates on next page load. For sub-30s reaction, switch to PostHog's bootstrap mode or use webhooks to your own kill-switch infrastructure.

Q: Are flag evaluations free or do they count as events? A: Flag evaluations are free. They count toward a separate decide quota (much larger than events quota). LLM calls within flag-gated code count as normal LLM events.

Q: Does this work with Edge / serverless? A: Yes — there's a PostHog edge SDK (Cloudflare Workers, Vercel Edge). For very high RPS edge use, prefer client-side flags (lower per-request latency).


Quick Use

  1. PostHog dashboard → Feature Flags → New flag
  2. posthog.get_feature_flag("my-flag", distinct_id=user_id) (Python) or posthog.isFeatureEnabled("my-flag") (JS)
  3. Branch your code based on the variant returned



Source & Thanks

Built by PostHog. Licensed under MIT.

PostHog/posthog — ⭐ 24,000+

