k3d — Run K3s Kubernetes Clusters Inside Docker
k3d wraps Rancher's K3s Kubernetes distribution into Docker containers, letting developers spin up ephemeral multi-node clusters in seconds for development, CI, and GitOps experimentation.
What it is
k3d is a lightweight wrapper around Rancher's K3s distribution that runs entire Kubernetes clusters inside Docker containers. Instead of provisioning VMs or paying for cloud clusters, you get a multi-node K8s environment on your laptop in under 30 seconds.
It targets developers who need fast, disposable clusters for testing Helm charts, validating manifests, running integration tests in CI, or experimenting with GitOps workflows without touching production infrastructure.
How it saves time or tokens
Traditional local Kubernetes options like minikube or kind tend to involve heavier setup or slower startup, especially with VM-based drivers. k3d leverages K3s (a stripped-down Kubernetes binary under 100MB) and Docker's container runtime to avoid VM overhead entirely. Cluster creation takes seconds, not minutes, and tearing down and recreating is nearly instant, which keeps your CI pipelines fast and your local dev loop tight.
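You can measure the create/delete cycle yourself; a quick sketch, assuming k3d and Docker are installed (the cluster name "bench" is arbitrary):

```shell
# Time a full create -> verify -> delete cycle of a throwaway cluster.
time k3d cluster create bench --wait
kubectl get nodes
time k3d cluster delete bench
```

On most laptops the create step finishes in well under a minute; exact timings depend on Docker's resources and whether the K3s image is already pulled.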
How to use
- Install k3d via the official install script or your package manager (brew install k3d, or curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash).
- Create a cluster with k3d cluster create mycluster --agents 2 to get a 3-node cluster (1 server + 2 agents).
- Your kubeconfig is merged automatically. Run kubectl get nodes to verify all nodes are ready.
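If the automatic kubeconfig merge does not fit your setup, k3d also exposes explicit kubeconfig subcommands; a sketch assuming a cluster named mycluster:

```shell
# Merge the cluster's kubeconfig into your default one and switch context.
k3d kubeconfig merge mycluster --kubeconfig-switch-context
# Or print it to stdout for manual handling:
k3d kubeconfig get mycluster
kubectl get nodes   # nodes should report Ready
```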
Example
# Create a cluster with a local registry and port mapping
k3d cluster create dev \
  --agents 3 \
  --port '8080:80@loadbalancer' \
  --registry-create dev-registry:5000
# Deploy an app
kubectl apply -f deployment.yaml
# Test locally via localhost:8080
curl http://localhost:8080
# Tear down when done
k3d cluster delete dev
Related on TokRepo
- DevOps tools on TokRepo — More infrastructure automation and deployment tools
- Self-hosted tools — Run your own services without cloud lock-in
Common pitfalls
- Docker Desktop resource limits can starve multi-node clusters. Allocate at least 4GB RAM and 2 CPUs to Docker before creating clusters with 3+ nodes.
- Port conflicts occur when multiple clusters map the same host port. Use unique port mappings per cluster or delete old clusters first.
- k3d clusters are ephemeral by default. Persistent volumes are lost on cluster deletion unless you mount host directories with --volume.
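Each of these pitfalls can be checked or avoided from the command line; a sketch where the ports and the /tmp/k3d-storage path are illustrative, not prescribed:

```shell
# 1. Verify the Docker daemon's CPU/RAM before creating a 3+ node cluster.
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'

# 2. Give each cluster a unique host port to avoid conflicts.
k3d cluster create dev  --port '8080:80@loadbalancer'
k3d cluster create test --port '8081:80@loadbalancer'

# 3. Mount a host directory so local-path volumes survive deletion.
k3d cluster create dev-persist \
  --volume /tmp/k3d-storage:/var/lib/rancher/k3s/storage@all
```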
Frequently Asked Questions
What is the difference between K3s and k3d? K3s is a lightweight Kubernetes distribution by Rancher that runs as a single binary. k3d is a wrapper that runs K3s inside Docker containers, so you can create and destroy multi-node clusters without touching your host system directly. k3d manages the Docker containers; K3s is the Kubernetes runtime inside them.
Is k3d better than minikube? Yes, for most use cases. k3d starts faster, uses fewer resources, and supports multi-node clusters natively. Minikube has broader driver support (VirtualBox, HyperKit, etc.), but if Docker is your runtime, k3d is typically faster and lighter.
Can k3d run a local image registry? Yes. Use the --registry-create flag when creating a cluster. k3d spins up a local Docker registry and configures all cluster nodes to pull from it, which eliminates the need to push images to a remote registry during development.
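A hedged sketch of that workflow, where the registry name dev-registry:5000 and the image name myapp are assumptions; the exact host port k3d publishes for the registry can vary, and docker ps shows the registry container's mapped port:

```shell
# Create a cluster with a managed local registry.
k3d cluster create dev --registry-create dev-registry:5000

# Build and push through the registry's host-side port.
docker build -t localhost:5000/myapp:dev .
docker push localhost:5000/myapp:dev

# Inside the cluster, reference the image by the registry's name.
kubectl create deployment myapp --image=dev-registry:5000/myapp:dev
```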
How do I persist data in a k3d cluster? Mount host directories into the cluster nodes using the --volume flag (e.g., --volume /tmp/storage:/var/lib/rancher/k3s/storage@all). Without explicit mounts, all data inside the cluster is lost when you delete it.
Can I use k3d in CI pipelines? Yes, k3d works well in CI environments that support Docker-in-Docker or have a Docker daemon available. GitHub Actions, GitLab CI, and Jenkins all support this. Create a cluster at the start of your pipeline, run tests, and delete it when done.
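A minimal CI-style sequence sketching that pattern; the manifests/ directory and the myapp deployment name are placeholders for your own project:

```shell
#!/bin/sh
set -e
# Install k3d via the official install script, create a throwaway
# cluster, run checks against it, and always clean up on exit.
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
k3d cluster create ci --agents 1 --wait
trap 'k3d cluster delete ci' EXIT
kubectl apply -f manifests/
kubectl rollout status deployment/myapp --timeout=120s
# ... run integration tests here ...
```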