# DeepFlow — eBPF Observability for Cloud & AI

> DeepFlow offers zero-code eBPF observability for Kubernetes/VMs—flows, metrics, traces, profiling—with OpenTelemetry support and a Docker Compose deploy.

## Install

Save the Quick Use commands below as a script file and run it.

## Quick Use

**Standalone (docker-compose package in repo):**

```bash
unset DOCKER_HOST_IP
DOCKER_HOST_IP="10.1.2.3"  # set to the machine IP you deploy on
wget https://deepflow-ce.oss-cn-beijing.aliyuncs.com/pkg/docker-compose/stable/linux/deepflow-docker-compose.tar
tar -zxf deepflow-docker-compose.tar
sed -i "s|FIX_ME_ALLINONE_HOST_IP|$DOCKER_HOST_IP|g" deepflow-docker-compose/docker-compose.yaml
docker-compose -f deepflow-docker-compose/docker-compose.yaml up -d
```

## Intro

DeepFlow provides zero-code eBPF observability for Kubernetes and VM workloads: network flows, metrics, distributed traces, and continuous profiling. It is OpenTelemetry-compatible and ships several install options, including the all-in-one Docker Compose deployment above.

- **Best for:** Kubernetes and VM observability with minimal instrumentation
- **Works with:** Linux + eBPF; Kubernetes/VMs; OpenTelemetry instrumentation
- **Setup time:** 20–60 minutes

## Practical Notes

- GitHub: 4,074 stars · 453 forks; pushed 2026-05-12 (verified via GitHub API).
- Repo includes `manifests/deepflow-docker-compose/docker-compose.yaml` for all-in-one deployment.
- Topics include `opentelemetry`, `kubernetes`, and `llm`, signaling a focus on modern cloud and AI workloads.

## Main

Use DeepFlow as a **"no-regrets baseline"** before you add app-level tracing everywhere:

- Start with the all-in-one Compose deployment to validate the data path (collection, storage, query) end-to-end.
- Connect one Kubernetes cluster or one VM pool first; confirm you can see service maps/flows and correlate spikes.
- Only then add targeted OpenTelemetry instrumentation for the top 1–3 critical services.

For production, plan capacity around your traffic volume: estimate storage from your p95 throughput and retention window, and set separate retention policies for traces, metrics, and logs to control cost and compliance exposure.
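The capacity note above can be turned into a back-of-envelope calculation. Every number and ratio in this sketch is an illustrative assumption (sample ratio, compression ratio), not a DeepFlow default; substitute your own measurements.

```python
# Back-of-envelope telemetry storage sizing. All parameters are
# assumptions for illustration -- not DeepFlow defaults.

def storage_gb(p95_throughput_mbps: float, sample_ratio: float,
               retention_days: int, compression_ratio: float = 8.0) -> float:
    """Estimate on-disk volume (GB) for one signal type.

    p95_throughput_mbps: sustained network throughput in megabits/s.
    sample_ratio: fraction of raw traffic actually persisted as telemetry.
    compression_ratio: assumed columnar-store compression factor.
    """
    bytes_per_day = p95_throughput_mbps / 8 * 1e6 * 86_400  # Mbps -> bytes/day
    stored = bytes_per_day * sample_ratio * retention_days / compression_ratio
    return stored / 1e9

# Example: 200 Mbps p95, 1% persisted as flow records, 7-day retention.
print(round(storage_gb(200, 0.01, 7), 1))  # -> 18.9 (GB)
```

Running the same formula per signal type with different retention windows makes the "separate retention policies for traces, metrics, and logs" trade-off concrete.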
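Before validating the data path end-to-end, check that the stack is reachable at all. A minimal probe sketch follows; the address in the example is hypothetical, so replace it with whatever host and ports your `docker-compose.yaml` actually exposes.

```python
# Minimal post-deploy reachability probe. The example URL below is an
# assumption -- check your compose file for the real exposed ports.
import urllib.error
import urllib.request

def is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers any HTTP status at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered, just not with 2xx
    except OSError:
        return False  # connection refused, DNS failure, or timeout

# Example (hypothetical address):
# print(is_up("http://10.1.2.3:20416/"))
```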
### FAQ

**Q: Is DeepFlow only for Kubernetes?**
A: No. The README and repo structure cover both Kubernetes and host/VM deployments.

**Q: Do I still need OpenTelemetry?**
A: Often yes, for deep application semantics on critical services, but DeepFlow provides useful coverage with minimal instrumentation.

**Q: What should I verify first?**
A: That the all-in-one deployment is stable and that you can correlate a known incident spike across flows, metrics, and traces.

## Source & Thanks

> Source: https://github.com/deepflowio/deepflow
> License: Apache-2.0
> GitHub stars: 4,074 · forks: 453

---
> Source: https://tokrepo.com/en/workflows/deepflow-ebpf-observability-for-cloud-ai
> Author: Script Depot