Configs · Apr 16, 2026 · 3 min read

k3d — Run K3s Kubernetes Clusters Inside Docker

k3d wraps Rancher's K3s Kubernetes distribution into Docker containers, letting developers spin up ephemeral multi-node clusters in seconds for development, CI, and GitOps experimentation.

TL;DR
k3d runs K3s Kubernetes clusters inside Docker containers, giving you disposable multi-node clusters in seconds.

What it is

k3d is a lightweight wrapper around Rancher's K3s distribution that runs entire Kubernetes clusters inside Docker containers. Instead of provisioning VMs or paying for cloud clusters, you get a multi-node K8s environment on your laptop in under 30 seconds.

It targets developers who need fast, disposable clusters for testing Helm charts, validating manifests, running integration tests in CI, or experimenting with GitOps workflows without touching production infrastructure.


How it saves time or tokens

Traditional local Kubernetes options like minikube or kind typically require heavier setup and start more slowly. k3d leverages K3s (a stripped-down Kubernetes distribution that ships as a single binary under 100MB) and Docker's container runtime to eliminate VM overhead entirely. Cluster creation takes seconds, not minutes. Tearing down and recreating is nearly instant, which means your CI pipelines run faster and your local dev loop stays tight.
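A minimal sketch of the create/destroy loop, assuming k3d and Docker are installed; "bench" is an arbitrary cluster name, and the guard lets the script run harmlessly where k3d is absent:

```shell
#!/bin/sh
# Time a throwaway cluster's full lifecycle.
CLUSTER="bench"   # arbitrary name for this sketch

if command -v k3d >/dev/null 2>&1; then
  time k3d cluster create "$CLUSTER" --agents 2   # creation: typically seconds
  time k3d cluster delete "$CLUSTER"              # teardown: near-instant
fi
```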


How to use

  1. Install k3d via the official install script or your package manager (brew install k3d, curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash).
  2. Create a cluster with k3d cluster create mycluster --agents 2 to get a 3-node cluster (1 server + 2 agents).
  3. Your kubeconfig is automatically merged. Run kubectl get nodes to verify all nodes are ready.
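The three steps above can be sketched as a single script; "mycluster" is an arbitrary name, and the command check is a hedge so the script degrades gracefully when k3d is not yet installed:

```shell
#!/bin/sh
# Create a small cluster and verify the nodes, per the steps above.
CLUSTER="mycluster"
AGENTS=2   # 1 server + 2 agents = 3 nodes total

if command -v k3d >/dev/null 2>&1; then
  k3d cluster create "$CLUSTER" --agents "$AGENTS"   # step 2: create the cluster
  kubectl get nodes                                  # step 3: expect all 3 nodes Ready
else
  echo "install k3d first, e.g. brew install k3d"    # step 1
fi
```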

Example

# Create a cluster with a local registry and port mapping
k3d cluster create dev \
  --agents 3 \
  --port '8080:80@loadbalancer' \
  --registry-create dev-registry:5000

# Deploy an app
kubectl apply -f deployment.yaml

# Test locally via localhost:8080
curl http://localhost:8080

# Tear down when done
k3d cluster delete dev


Common pitfalls

  • Docker Desktop resource limits can starve multi-node clusters. Allocate at least 4GB RAM and 2 CPUs to Docker before creating clusters with 3+ nodes.
  • Port conflicts occur when multiple clusters map the same host port. Use unique port mappings per cluster or delete old clusters first.
  • k3d clusters are ephemeral by default. Persistent volumes are lost on cluster deletion unless you mount host directories with --volume.
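The ephemerality pitfall can be worked around as sketched below. This assumes the default K3s local-path provisioner storage directory inside the nodes; the cluster name and host directory are arbitrary:

```shell
#!/bin/sh
# Keep persistent-volume data on the host so it survives cluster deletion.
CLUSTER="pv-demo"                           # arbitrary cluster name
HOST_DIR="/tmp/k3d-storage"                 # arbitrary host directory
NODE_DIR="/var/lib/rancher/k3s/storage"     # assumed local-path provisioner dir

mkdir -p "$HOST_DIR"
if command -v k3d >/dev/null 2>&1; then
  k3d cluster create "$CLUSTER" --volume "$HOST_DIR:$NODE_DIR@all"
  # ... use the cluster; PV data written by pods lands in $HOST_DIR ...
  k3d cluster delete "$CLUSTER"             # $HOST_DIR and its contents survive
fi
```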

Frequently Asked Questions

What is the difference between k3d and k3s?

K3s is a lightweight Kubernetes distribution by Rancher that runs as a single binary. k3d is a wrapper that runs K3s inside Docker containers, so you can create and destroy multi-node clusters without touching your host system directly. k3d manages the Docker containers; K3s is the Kubernetes runtime inside them.

Can k3d replace minikube for local development?

Yes, for most use cases. k3d starts faster, uses fewer resources, and supports multi-node clusters natively. Minikube has broader driver support (VirtualBox, HyperKit, etc.), but if Docker is your runtime, k3d is typically faster and lighter.

Does k3d support local container registries?

Yes. Use the --registry-create flag when creating a cluster. k3d spins up a local Docker registry and configures all cluster nodes to pull from it, which eliminates the need to push images to a remote registry during development.
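A sketch of that flow, assuming Docker is running; the registry and cluster names, the host port, and the image tag are all arbitrary choices for illustration. Note that k3d prefixes created registries with "k3d-":

```shell
#!/bin/sh
# Build and push to a cluster-local registry instead of a remote one.
REG_NAME="dev-registry"
REG_PORT=5000
IMAGE="k3d-$REG_NAME:$REG_PORT/myapp:v1"   # hypothetical image tag

if command -v k3d >/dev/null 2>&1; then
  k3d cluster create dev --registry-create "$REG_NAME:$REG_PORT"
  docker build -t "$IMAGE" .      # assumes a Dockerfile in the current directory
  docker push "$IMAGE"            # pushes to the local registry
  kubectl create deployment myapp --image="$IMAGE"
fi
```

On the host you may need an /etc/hosts entry mapping the k3d-prefixed registry name to 127.0.0.1; inside the cluster the nodes are configured to resolve it automatically.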

How do I persist data across cluster restarts?

Mount host directories into the cluster nodes using the --volume flag (e.g., --volume /tmp/storage:/var/lib/rancher/k3s/storage@all). Without explicit mounts, all data inside the cluster is lost when you delete it.

Can I use k3d in CI/CD pipelines?

Yes, k3d works well in CI environments that support Docker-in-Docker or have a Docker daemon available. GitHub Actions, GitLab CI, and Jenkins all support this. Create a cluster at the start of your pipeline, run tests, and delete it when done.
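A minimal CI step might look like the sketch below. It assumes a Docker daemon is available to the runner; the manifests directory is hypothetical, and the `$$`-suffixed name is one way to avoid collisions between parallel jobs:

```shell
#!/bin/sh
# Disposable per-job cluster: create, test, delete.
CLUSTER="ci-$$"   # unique-ish name so parallel jobs don't collide

if command -v k3d >/dev/null 2>&1; then
  k3d cluster create "$CLUSTER" --agents 1 --wait
  kubectl apply -f manifests/    # hypothetical directory of manifests under test
  kubectl wait --for=condition=Ready pods --all --timeout=120s
  # ... run integration tests against the cluster here ...
  k3d cluster delete "$CLUSTER"  # always tear down so reused runners stay clean
fi
```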

