Configs · Apr 15, 2026 · 3 min read

Pixie — eBPF-Based Auto-Instrumentation for Kubernetes Observability

CNCF observability platform that uses eBPF to capture metrics, traces, and logs from every pod with zero code changes.

TL;DR
Pixie uses eBPF to capture HTTP, gRPC, SQL, and DNS traffic from every Kubernetes pod with zero code changes.
§01

What it is

Pixie is a CNCF sandbox project that instruments Kubernetes workloads using eBPF probes attached at the kernel boundary. There are no agents to import, no SDKs to adopt, and no sampling. Pixie captures full-fidelity HTTP, gRPC, MySQL, Postgres, Redis, DNS, and TLS traffic automatically from every pod on every node.

Pixie is designed for platform engineering teams running Kubernetes who need instant observability without modifying application code. It runs as a DaemonSet with data staying inside the cluster.

§02

How it saves time or tokens

Traditional observability requires adding instrumentation libraries to every service, configuring exporters, and managing sampling rates. Pixie eliminates all of that. Deploy it once, and it captures everything at the kernel level. There is no code change, no redeployment, and no per-service configuration. Debugging sessions that previously required log correlation across services become single PxL script queries.
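To make that concrete, a cluster-wide debugging question like "which requests returned server errors?" reduces to one short PxL query. This is a hedged sketch: the `http_events` columns `resp_status`, `req_path`, and `time_` follow Pixie's documented schema, but verify them against your Pixie version.

```
import px

# Pull the last 10 minutes of HTTP traffic captured by the eBPF probes.
df = px.DataFrame(table='http_events', start_time='-10m')
df.service = df.ctx['service']

# Keep only server errors, across every service, with no per-service setup.
df = df[df.resp_status >= 500]
px.display(df[['time_', 'service', 'req_path', 'resp_status']])
```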

§03

How to use

  1. Install the Pixie CLI.
  2. Deploy Pixie to your Kubernetes cluster using your kubeconfig.
  3. Run built-in scripts to view live traffic.
# Install the CLI
bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"

# Deploy to your cluster
px deploy

# View live HTTP traffic by service
px run px/http_data

# Open the live UI
px live px/http_data
§04

Example

Querying HTTP request latency by service using PxL (Pixie's query language):

import px

df = px.DataFrame(table='http_events', start_time='-5m')
df.service = df.ctx['service']
df.latency_ms = df.resp_latency_ns / 1e6
df = df.groupby('service').agg(
    # px.quantiles computes a set of percentiles as a single struct column.
    latency=('latency_ms', px.quantiles),
    count=('latency_ms', px.count),
)
# Pluck the individual percentiles out of the quantiles struct.
df.p50 = px.pluck_float64(df.latency, 'p50')
df.p99 = px.pluck_float64(df.latency, 'p99')
px.display(df[['service', 'p50', 'p99', 'count']])

This returns p50 and p99 latency for every service in the cluster without any prior instrumentation.
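The same table also supports per-request drill-down. Here is a sketch (same assumed `http_events` schema; the 250 ms threshold is an arbitrary example) that lists individual slow requests instead of aggregates:

```
import px

df = px.DataFrame(table='http_events', start_time='-5m')
df.service = df.ctx['service']
df.latency_ms = df.resp_latency_ns / 1e6

# Filter to individual requests slower than 250 ms.
df = df[df.latency_ms > 250.0]
px.display(df[['time_', 'service', 'latency_ms']])
```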

§05

Common pitfalls

  • Pixie requires Linux kernel 4.14+ with eBPF support. Some managed Kubernetes providers restrict eBPF capabilities.
  • Data stays in-cluster by default, which means limited retention (typically 24 hours). For long-term storage, export to an external backend.
  • PxL scripts have a learning curve. Start with the built-in scripts before writing custom queries.
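For the retention pitfall, Pixie's OpenTelemetry export plugin lets a PxL script ship data out of the cluster to a configured collector. This is a rough sketch based on the documented `px.otel` API; treat the field names as assumptions and check the export plugin docs for your backend.

```
import px

df = px.DataFrame(table='http_events', start_time='-1m')
df.service = df.ctx['service']

# Export each captured HTTP request as an OTel span.
px.export(df, px.otel.Data(
    resource={'service.name': df.service},
    data=[px.otel.trace.Span(
        name=df.req_path,
        start_time=df.time_ - df.resp_latency_ns,
        end_time=df.time_,
    )],
))
```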

Frequently Asked Questions

Does Pixie require code changes?

No. Pixie uses eBPF probes at the kernel level to capture network traffic automatically. There are no SDKs, no library imports, and no annotation requirements. Deploy Pixie to your cluster and it starts capturing data immediately.

What protocols does Pixie capture?

Pixie automatically captures HTTP/1.1, HTTP/2, gRPC, MySQL, PostgreSQL, Cassandra, Redis, DNS, Kafka, and AMQP traffic. TLS-encrypted traffic is captured by probing the application's TLS library before encryption, so no certificate configuration is needed.

Where does Pixie store data?

Data stays inside the cluster on each node's memory. Pixie does not send telemetry to external servers by default. Retention is limited by available memory, typically covering the last 24 hours. You can export data to Grafana, Elastic, or other backends for long-term storage.

How does Pixie compare to Datadog or New Relic?

Datadog and New Relic are SaaS platforms requiring per-service agents and instrumentation. Pixie is open source, runs entirely in-cluster, and uses eBPF for zero-instrumentation capture. Pixie is free but has limited retention; commercial APMs offer longer retention and advanced analytics.

What is PxL?

PxL is Pixie's query language, based on Python syntax. It operates on DataFrames of captured telemetry data. Built-in scripts cover common queries like HTTP latency, DNS errors, and service maps. You can write custom PxL scripts for specific debugging scenarios.

