# Kaniko — Build Container Images in Kubernetes Without a Docker Daemon

> Google's tool for building OCI container images from a Dockerfile inside a container or Kubernetes cluster — no privileged daemon required.

## Quick Use

```bash
# One-off local build (still runs in a container)
docker run --rm \
  -v "$PWD":/workspace \
  -v ~/.docker/config.json:/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=Dockerfile \
  --context=dir:///workspace \
  --destination=ghcr.io/you/app:v1
```

```yaml
# Inside a Kubernetes Job / GitLab runner pod
spec:
  containers:
  - name: build
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=Dockerfile", "--destination=registry.example.com/app:$(CI_COMMIT_SHA)"]
    volumeMounts:
    - {name: docker-config, mountPath: /kaniko/.docker}
```

## Introduction

Kaniko solves one of the most persistent pain points in cloud-native CI: building container images without giving the build pod access to `/var/run/docker.sock` or running it privileged. It executes each Dockerfile instruction in userspace, snapshots the filesystem, and pushes the resulting OCI image to any registry — well suited to hardened, multi-tenant Kubernetes.

## What Kaniko Does

- Parses a Dockerfile and executes RUN/COPY/ADD/ENV instructions inside its own rootfs
- Captures a filesystem diff after each instruction to form an image layer
- Pushes the final image to Docker Hub, GCR, ECR, GHCR, Harbor, or Artifactory
- Caches intermediate layers in a registry or cloud bucket (`--cache=true`)
- Plugs into GitLab CI, Jenkins, Tekton, Argo Workflows, and GitHub Actions

## Architecture Overview

The Kaniko executor is a single Go binary shipped as `gcr.io/kaniko-project/executor`. It runs as PID 1 inside a container, unpacks the base image into its own scratch rootfs, and uses a userspace chroot plus a filesystem snapshotter to execute each instruction and capture each layer.
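The snapshot-and-diff loop described above can be sketched in plain shell. This is a toy illustration of the principle only — Kaniko's real snapshotter is written in Go and also tracks metadata such as permissions and deletions; all file and directory names below are invented:

```shell
# Toy sketch of the snapshot idea: hash every file before and after a
# simulated RUN step, then archive only the changed paths as the new layer.
set -eu
work=$PWD
root=$(mktemp -d)          # stands in for the unpacked base-image rootfs
mkdir -p "$root/etc" "$root/app"
echo base > "$root/etc/os-release"

# Snapshot 1: content hash of every file in the rootfs
(cd "$root" && find . -type f -exec sha256sum {} +) | sort > before.txt

# Simulate a Dockerfile RUN instruction mutating the rootfs
echo 'built artifact' > "$root/app/bin"

# Snapshot 2, then diff: files new or changed since snapshot 1 form the layer
(cd "$root" && find . -type f -exec sha256sum {} +) | sort > after.txt
comm -13 before.txt after.txt | awk '{print $2}' > layer-files.txt
(cd "$root" && tar -cf "$work/layer.tar" -T "$work/layer-files.txt")
tar -tf layer.tar          # only ./app/bin made it into the layer
```

Because only the diff is archived, unchanged base-image files never get re-uploaded — the same property that makes Kaniko's registry-backed layer cache effective.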
Because it never needs CAP_SYS_ADMIN or a Docker daemon, it is safe to run as a normal pod — the key security reason teams moved off Docker-in-Docker.

## Self-Hosting & Configuration

- Run as a Kubernetes Job, Tekton TaskRun, or CI runner container — no DaemonSet needed
- Credentials: mount a `.docker/config.json` or use workload identity on GKE/EKS
- Layer cache: `--cache=true --cache-repo=registry.example.com/cache` (a large speedup on repeat builds)
- Build args: `--build-arg KEY=VAL`; exclude paths with `--ignore-path`
- Multi-arch: Kaniko builds one architecture at a time — run one build per arch and combine the results into a `buildx`-style manifest list

## Key Features

- Rootless and daemonless — no `privileged: true`
- Drop-in compatibility with existing Dockerfiles
- Registry-backed layer cache for fast incremental builds
- OCI-compliant output (works with cosign, notation, Trivy, etc.)
- Maintained by Google under the `GoogleContainerTools` org

## Comparison with Similar Tools

- **BuildKit** — faster, with more features, but needs a daemon or a rootless setup; more configuration
- **Buildah** — strong CLI and Podman ecosystem, but requires user namespaces in the pod
- **img** — a similar rootless approach, but less actively maintained
- **Docker-in-Docker** — works, but needs privileged mode, which most clusters forbid
- **Ko / Jib** — language-specific (Go, Java); no Dockerfile needed, but not general-purpose

## FAQ

**Q:** Does Kaniko need root?
**A:** No. It runs as UID 0 inside its own container but needs no host privileges or capabilities.

**Q:** How do I speed up CI builds?
**A:** Turn on `--cache=true` and point `--cache-repo` at a registry; combine with `--snapshotMode=redo`.

**Q:** Can it do multi-stage builds?
**A:** Yes. All `FROM ... AS stage` blocks work; intermediate stages are discarded, as with Docker.

**Q:** Does it support BuildKit frontends?
**A:** No. Kaniko parses standard Dockerfile syntax only, not `# syntax=docker/dockerfile:1` experimental directives.
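Putting the self-hosting checklist above together, a complete build might be a Kubernetes Job like the following. This is a sketch only: the names (`kaniko-build`, `regcred`, `registry.example.com`) are placeholders, and it assumes a `kubernetes.io/dockerconfigjson` Secret for registry credentials already exists in the namespace:

```yaml
# Illustrative Job manifest — placeholder names throughout
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-build
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: build
        image: gcr.io/kaniko-project/executor:latest
        args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/you/app.git
        - --destination=registry.example.com/app:v1
        - --cache=true
        - --cache-repo=registry.example.com/cache
        volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
      volumes:
      - name: docker-config
        secret:
          secretName: regcred          # kubernetes.io/dockerconfigjson Secret
          items:
          - key: .dockerconfigjson
            path: config.json          # Kaniko reads /kaniko/.docker/config.json
```

Note the pod spec requests no `privileged` flag and no host mounts — the credentials Secret is the only thing the build needs beyond the source context.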
## Sources

- https://github.com/GoogleContainerTools/kaniko
- https://cloud.google.com/build/docs/kaniko-cache