Apr 15, 2026 · 3 min read

Kubeshark — API Traffic Viewer for Kubernetes

eBPF-powered, sidecar-free traffic viewer for Kubernetes that decodes HTTP, gRPC, Kafka, Redis and more across every pod in the cluster.

TL;DR
Kubeshark captures and decodes L4/L7 traffic between Kubernetes pods in real time using eBPF, no sidecars required.
§01

What it is

Kubeshark is an open-source real-time API traffic viewer for Kubernetes. Think of it as Wireshark for your cluster -- it captures pod-to-pod, pod-to-service, and external traffic at the node level using eBPF and raw sockets, then decodes application protocols in a browser-based UI.

It is built for platform engineers, SREs, and developers who need to debug microservice communication without modifying workloads or installing service mesh sidecars. Kubeshark decodes HTTP/1.1, HTTP/2, gRPC, Kafka, Redis, AMQP, MongoDB wire protocol, DNS, and raw TCP.

§02

How it saves time or tokens

Traditional Kubernetes traffic debugging involves adding sidecar proxies, enabling verbose logging in each service, or deploying a full service mesh. Kubeshark eliminates all of that. A single kubeshark tap command deploys a DaemonSet that immediately starts capturing traffic across all pods in the cluster.

The built-in query language (KFL) lets you filter traffic in real time, so you can isolate the exact API call causing issues without sifting through logs. You can also export to PCAP files for offline analysis in Wireshark.
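For instance, a few KFL queries of this shape can isolate a failing endpoint in a live capture (the request.path field matches the example used later in this article; other field names such as response.status are illustrative, so verify them against the KFL reference for your Kubeshark version):

```
# all HTTP traffic to a specific endpoint
http and request.path == "/api/v1/users"

# only server errors on that endpoint (response.status is an assumed field name)
http and request.path == "/api/v1/users" and response.status == 500
```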

§03

How to use

  1. Install the CLI: sh <(curl -Ls https://kubeshark.co/install)
  2. Run kubeshark tap against your current kube-context to capture traffic across all pods, or scope to a namespace with kubeshark tap -n my-ns.
  3. Open the browser UI at http://localhost:8899 to view decoded traffic with filters.
§04

Example

# Install the CLI
sh <(curl -Ls https://kubeshark.co/install)

# Tap all pods in the cluster
kubeshark tap

# Tap a specific namespace
kubeshark tap -n my-namespace

# Tap pods matching a pattern
kubeshark tap 'api-server.*'

# Open the UI at http://localhost:8899
# Use KFL to filter: http and request.path == '/api/v1/users'
§05


Common pitfalls

  • Kubeshark requires privileged DaemonSet pods with host network access; some managed Kubernetes providers restrict this by default.
  • On high-traffic clusters, capturing all traffic can generate significant data volume -- use KFL filters or namespace scoping to reduce noise.
  • eBPF-based capture requires Linux kernel 4.16 or later; on older kernels Kubeshark falls back to raw socket capture with reduced protocol support.

Frequently Asked Questions

Does Kubeshark require a service mesh?

No. Kubeshark operates independently of any service mesh. It captures traffic at the node level using eBPF and raw sockets, so you do not need Istio, Linkerd, or any sidecar proxy installed.

What protocols does Kubeshark decode?

Kubeshark decodes HTTP/1.1, HTTP/2, gRPC, Kafka, Redis, AMQP, MongoDB wire protocol, DNS, and raw TCP. New protocol decoders can be added through the extension system.

Can I use Kubeshark in production?

Yes, but with care. Kubeshark captures traffic at the node level, which has minimal performance overhead. However, on very high-traffic clusters you should scope capture to specific namespaces or pod patterns to limit resource usage.

How does the KFL query language work?

KFL (Kubeshark Filter Language) lets you filter captured traffic in real time using expressions like 'http and request.path contains /api'. It supports logical operators, field comparisons, and regex matching.
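A few illustrative expressions in the same shape as the one quoted above (request.method is an assumed field name, and exact operator syntax can vary between versions, so check the KFL reference for yours):

```
# substring match on the request path
http and request.path contains "/api"

# combine conditions with logical operators and parentheses
http and (request.method == "POST" or request.method == "PUT")
```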

Can I export captures for offline analysis?

Yes. Kubeshark can record traffic to PCAP files that you can open in Wireshark or other network analysis tools for detailed offline investigation.


