Knowledge · May 8, 2026 · 4 min read

Langfuse Self-Hosting — Production Docker Compose Stack

Production Docker Compose for self-hosted Langfuse v3. Postgres, Clickhouse, Redis, MinIO, Worker, Web. Auth, S3 logs, daily backup.

Introduction

This is a production-ready Docker Compose stack for self-hosting Langfuse v3: Postgres, Clickhouse, Redis, MinIO, Worker, and Web behind a single `docker compose up`. It includes SSO config snippets (Google, Okta, GitHub OAuth), S3 log offloading, a daily Postgres backup cron, and scaling notes for workloads above 1M observations/day. Best for: teams handling sensitive prompt data who can't ship to Langfuse Cloud, such as healthcare, finance, and other regulated on-prem environments. Works with: any Linux host with Docker 24+. Setup time: 30 minutes from zero to first trace.


docker-compose.yml core

services:
  langfuse-web:
    image: langfuse/langfuse:3
    environment:
      DATABASE_URL: postgresql://lf:${POSTGRES_PASSWORD}@postgres:5432/langfuse
      CLICKHOUSE_URL: http://clickhouse:8123
      CLICKHOUSE_USER: lf
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
      REDIS_URL: redis://redis:6379
      NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
      NEXTAUTH_URL: https://langfuse.example.com
      AUTH_GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
      AUTH_GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
      LANGFUSE_S3_EVENT_UPLOAD_BUCKET: langfuse-events
      LANGFUSE_S3_EVENT_UPLOAD_REGION: us-east-1
    ports: ["3000:3000"]
    depends_on: [postgres, clickhouse, redis, minio]

  langfuse-worker:
    image: langfuse/langfuse-worker:3
    environment: # mirrors the web service
      DATABASE_URL: postgresql://lf:${POSTGRES_PASSWORD}@postgres:5432/langfuse
      CLICKHOUSE_URL: http://clickhouse:8123
      CLICKHOUSE_USER: lf
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
      REDIS_URL: redis://redis:6379

  postgres:
    image: postgres:16
    volumes: ["pgdata:/var/lib/postgresql/data"]
    environment:
      POSTGRES_USER: lf
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: langfuse

  clickhouse:
    image: clickhouse/clickhouse-server:24
    volumes: ["chdata:/var/lib/clickhouse"]
    environment:
      CLICKHOUSE_USER: lf
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}

  redis:
    image: redis:7-alpine

  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment: # without these, MinIO boots with the insecure minioadmin default
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    volumes: ["miniodata:/data"]

volumes: { pgdata: {}, chdata: {}, miniodata: {} }
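The compose file reads its secrets from a `.env` file. A minimal sketch for bootstrapping one (variable names match the stack above; the `GOOGLE_*` and MinIO placeholders are assumptions you must replace with real values):

```shell
# Generate strong secrets and write the .env the compose file expects.
NEXTAUTH_SECRET=$(openssl rand -hex 32)
POSTGRES_PASSWORD=$(openssl rand -hex 16)
CLICKHOUSE_PASSWORD=$(openssl rand -hex 16)
MINIO_ROOT_PASSWORD=$(openssl rand -hex 16)
cat > .env <<EOF
NEXTAUTH_SECRET=${NEXTAUTH_SECRET}
POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD}
MINIO_ROOT_USER=minio-admin
MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
GOOGLE_CLIENT_ID=replace-me
GOOGLE_CLIENT_SECRET=replace-me
EOF
chmod 600 .env  # secrets should not be world-readable
```

Keep `.env` out of version control; the compose file interpolates these values at `up` time.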

Daily Postgres backup

# "postgres" assumes container_name: postgres; with default compose naming the
# container is <project>-postgres-1. Adjust to match your deployment.
0 2 * * * docker exec postgres pg_dump -U lf langfuse | gzip > /backup/lf-$(date +\%F).sql.gz
0 3 * * * find /backup -name "lf-*.sql.gz" -mtime +30 -delete
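A backup is only useful if it restores. A restore sketch matching the cron above (the `docker exec` line is commented out because it assumes a running container named `postgres`):

```shell
# Pick the newest dump written by the backup cron.
latest_backup() {  # newest lf-*.sql.gz in the given directory, or empty
  ls -t "$1"/lf-*.sql.gz 2>/dev/null | head -n 1
}

# Pipe it back into Postgres (assumed container name "postgres"):
# gunzip -c "$(latest_backup /backup)" | docker exec -i postgres psql -U lf langfuse
```

Run a test restore into a scratch database periodically rather than trusting the cron log.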

Scaling thresholds

Workload                 Setup
<100K observations/day   Single-node compose, default sizing
100K–1M                  Move Clickhouse to a 4-core / 16 GB box, separate Redis
1M–10M                   Multi-replica web/worker, managed Clickhouse cluster, S3 for events
>10M                     Talk to Langfuse Enterprise
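For the multi-replica tier, a `docker-compose.override.yml` sketch (replica count is illustrative, not benchmarked; `langfuse-web` is left at one replica here because the base file binds host port 3000, so put a reverse proxy in front before scaling web):

```
services:
  langfuse-worker:
    deploy:
      replicas: 3
```

Recent Docker Compose honors `deploy.replicas` on plain `docker compose up -d`.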

Health checks

curl -f https://langfuse.example.com/api/public/health  # web
docker exec clickhouse clickhouse-client -q "SELECT count() FROM traces"
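The two checks above can be wrapped in a small sweep script; the probe commands are commented out because they assume the example hostname and container names:

```shell
# Run a command silently, print PASS/FAIL, return nonzero on failure.
check() {
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $*"
  else
    echo "FAIL: $*"
    return 1
  fi
}

# check curl -fsS https://langfuse.example.com/api/public/health
# check docker exec clickhouse clickhouse-client -q "SELECT 1"
```

Wire the script into cron or your monitoring agent and alert on a nonzero exit.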

FAQ

Q: How does Langfuse v3 differ from v2 for self-hosters? A: v3 introduced Clickhouse for traces (v2 used Postgres only), MinIO/S3 for event uploads, and a separate worker service. Migration guide is in the repo. v3 handles 10× the observation volume on the same hardware.

Q: Do I need MinIO if I'm small? A: Below 100K observations/day, no — Langfuse falls back to direct DB writes. MinIO is the queue for high-volume event ingestion. For prod scale or cloud parity, run it. For dev, omit.

Q: What about TLS and reverse proxy? A: Run Caddy or nginx in front. Caddy auto-issues Let's Encrypt: langfuse.example.com { reverse_proxy langfuse-web:3000 } — that's it. NEXTAUTH_URL must match the public HTTPS URL or Google OAuth redirects break.
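The Caddy snippet from the FAQ, written out as a complete Caddyfile (sketch: Caddy must share a Docker network with the `langfuse-web` container for the upstream name to resolve):

```
langfuse.example.com {
    reverse_proxy langfuse-web:3000
}
```

With this in place, set `NEXTAUTH_URL` to `https://langfuse.example.com` and drop the `3000:3000` host port binding so the app is reachable only through TLS.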


Quick Use

  1. Copy docker-compose.yml + .env from langfuse/langfuse repo
  2. Generate NEXTAUTH_SECRET (openssl rand -hex 32)
  3. docker compose up -d, open localhost:3000
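Once the stack is up, a first trace can be sent to the public ingestion API. A sketch of the payload (endpoint path and event shape are assumptions from the v3 API docs; verify against your Langfuse version):

```python
# Build a batch-ingestion payload containing one trace-create event.
import datetime
import json
import uuid


def trace_event(name):
    """One 'trace-create' event in the batch-ingestion envelope."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {
        "id": str(uuid.uuid4()),  # event id, used as an idempotency key
        "type": "trace-create",
        "timestamp": now,
        "body": {"id": str(uuid.uuid4()), "name": name, "timestamp": now},
    }


payload = json.dumps({"batch": [trace_event("smoke-test")]})
print(payload)
# POST it with your project keys as basic auth, e.g.:
#   curl -u pk-lf-...:sk-lf-... -H 'Content-Type: application/json' \
#        -d "$payload" https://langfuse.example.com/api/public/ingestion
```

If the trace appears in the UI at `localhost:3000`, ingestion through Redis, the worker, and Clickhouse is working end to end.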



Source & Thanks

Built by Langfuse. Licensed under MIT.

langfuse/langfuse — ⭐ 8,000+
