Configs · May 8, 2026 · 4 min read

PostHog Feature Flags — Gradually Roll Out AI Features

PostHog Feature Flags toggle features by user, cohort, or rollout percentage. Wrap LLM features behind a flag, A/B test different prompts, and flip the kill switch during incidents.

Agent-ready: this asset can be read and installed directly by agents. TokRepo exposes a universal CLI command, an install contract, JSON metadata, a per-adapter plan, and raw content so agents can evaluate compatibility, risk, and next steps.

Agent surface: Any MCP/CLI agent
Type: Skill
Installation: Stage only (17/100)
Trust: New
Input: Asset

Universal CLI command:
npx tokrepo install 8d278dd2-b995-4e62-aa95-b4579a34538d
Introduction

PostHog Feature Flags let you toggle features by user, cohort, or rollout percentage. Wrap a new LLM feature behind a flag, ramp it 1% → 5% → 20% → 100% over a week, and kill it instantly if Helicone shows error spikes. Best for: any team shipping LLM features to real users without YOLO deploys. Works with: PostHog SDKs in 20+ languages, server-side and client-side. Setup time: 5 minutes.


Define a flag in PostHog

PostHog → Feature Flags → New flag:

  • Key: new-summarization-model
  • Rollout: 5% of users (sticky by distinct_id; see the bucketing sketch below)
  • Override: enabled for cohort internal-staff
  • Variant flag (A/B): 50/50 split between claude-sonnet and gpt-4o
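
Rollouts are sticky: the same distinct_id always lands in the same bucket, so a user never flips between variants across sessions. A toy sketch of deterministic percentage bucketing (illustrative only, not PostHog's exact algorithm):

import hashlib

def in_rollout(flag_key: str, distinct_id: str, percentage: float) -> bool:
    # Hash flag key + user id into a stable number in [0, 1).
    digest = hashlib.sha1(f"{flag_key}.{distinct_id}".encode()).hexdigest()
    bucket = int(digest[:15], 16) / 0xFFFFFFFFFFFFFFF
    return bucket < percentage / 100

in_rollout("new-summarization-model", "user_42", 5)  # same result on every call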

Server-side (Python)

import os

import posthog

posthog.project_api_key = os.environ["POSTHOG_API_KEY"]

def summarize(text: str, user_id: str):
    # Returns the variant key for multivariate flags, True/False for
    # boolean flags, or None if the user is not in the rollout.
    variant = posthog.get_feature_flag(
        "new-summarization-model",
        distinct_id=user_id,
    )

    if variant == "claude-sonnet":
        return claude.summarize(text)   # placeholder: your Claude client
    elif variant == "gpt-4o":
        return openai_summarize(text)   # placeholder: your OpenAI helper
    else:
        return legacy_summarize(text)   # control / flag off
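
A network round trip per flag check adds latency on the hot path. The Python SDK can evaluate flags locally if you also provide a personal API key (it then polls flag definitions in the background). A sketch, assuming a standard posthog-python client; check your SDK version for exact parameter names:

import os
from posthog import Posthog

posthog = Posthog(
    project_api_key=os.environ["POSTHOG_API_KEY"],
    personal_api_key=os.environ["POSTHOG_PERSONAL_API_KEY"],  # enables local evaluation
    host="https://us.posthog.com",
)

variant = posthog.get_feature_flag(
    "new-summarization-model",
    distinct_id=user_id,
    only_evaluate_locally=True,  # skip the per-request round trip
)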

Client-side (TypeScript)

import posthog from "posthog-js";

posthog.init("phc_...", { api_host: "https://us.posthog.com" });

// Flags load asynchronously after init; wait for them before checking.
posthog.onFeatureFlags(() => {
  if (posthog.isFeatureEnabled("show-ai-chat")) {
    renderChatWidget();
  }

  const variant = posthog.getFeatureFlag("new-summarization-model");
  // "claude-sonnet" | "gpt-4o" | undefined (control)
});

Combine with LLM observability

When you tie flags and LLM traces together with the same distinct_id, you get an end-to-end view per variant:

PostHog dashboard:
  - Cohort: users on `new-summarization-model = claude-sonnet` (5% rollout)
  - LLM cost p50: $0.012 / call
  - LLM error rate: 0.3%
  - User satisfaction (custom event): 4.6/5

  - Cohort: users on `new-summarization-model = gpt-4o` (5% rollout)
  - LLM cost p50: $0.018 / call
  - LLM error rate: 1.2%
  - User satisfaction: 4.4/5

  → Promote claude-sonnet to 100%
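
For a breakdown like this, each LLM event needs the variant attached. posthog-python can attach active flags to an event via send_feature_flags, or you can set the $feature/<flag-key> property yourself. A minimal sketch (the event name and property values are illustrative):

posthog.capture(
    distinct_id=user_id,
    event="llm_summarize_completed",
    properties={
        "$feature/new-summarization-model": variant,  # enables per-variant breakdowns
        "cost_usd": 0.012,
        "satisfaction": 5,
    },
)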

Kill switch

# In your incident playbook: the SDKs only read flags, so disable via the REST API (or the UI).
requests.patch(f"https://us.posthog.com/api/projects/{PROJECT_ID}/feature_flags/{FLAG_ID}",
               headers={"Authorization": f"Bearer {PERSONAL_API_KEY}"},
               json={"active": False})

Within 30 seconds, all server and client SDKs see the flag disabled, and your code falls back to the control branch. No deploy needed.


FAQ

Q: How fast is the kill switch?
A: Server-side SDKs poll every 30 seconds by default. Client-side updates on the next page load. For sub-30s reaction, switch to PostHog's bootstrap mode or use webhooks to your own kill-switch infrastructure.
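
If 30 seconds is too slow, the server SDKs expose the polling cadence; in posthog-python it's the poll_interval setting (an assumption; confirm the name against your SDK version):

posthog.poll_interval = 10  # re-fetch flag definitions every 10s instead of the default 30s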

Q: Are flag evaluations free or do they count as events?
A: Flag evaluations are free. They count toward a separate decide quota (much larger than the events quota). LLM calls within flag-gated code count as normal LLM events.

Q: Does this work with Edge / serverless?
A: Yes — there's a PostHog edge SDK (Cloudflare Workers, Vercel Edge). For very high RPS edge use, prefer client-side flags (lower per-request latency).


Quick Use

  1. PostHog dashboard → Feature Flags → New flag
  2. posthog.get_feature_flag("my-flag", distinct_id=user_id) (Python) or posthog.isFeatureEnabled("my-flag") (JS)
  3. Branch your code based on the variant returned

Source & Thanks

Built by PostHog. Licensed under MIT.

PostHog/posthog — ⭐ 24,000+

