Introduction
Karmada (Kubernetes Armada) was open sourced by Huawei and is now a CNCF project. It lets you run one "host" Kubernetes control plane that schedules workloads onto many "member" clusters — across clouds, regions, or on-prem sites — while each workload stays a normal Deployment, Service, or CRD.
What Karmada Does
- Provides a super-API server that aggregates Pods, Services, and CRDs across clusters.
- Schedules workloads with placement policies (label, region, taint, weight, resource fit).
- Fails over or rebalances Deployments when a cluster goes unhealthy.
- Mirrors ConfigMaps, Secrets, and CRDs to target clusters with override rules.
- Supports pull mode (edge) and push mode (central) for member-cluster control.
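Placement is driven by declarative policy objects. A minimal sketch of a PropagationPolicy that spreads a Deployment across two member clusters with weighted replica division (the cluster names `member1`/`member2` and the `nginx` Deployment are illustrative):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  # Which resources this policy applies to.
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    # Split replicas 2:1 between the two clusters.
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames: [member1]
            weight: 2
          - targetCluster:
              clusterNames: [member2]
            weight: 1
```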
Architecture Overview
The Karmada control plane is itself a Kubernetes cluster — etcd, kube-apiserver, scheduler, controller-manager, plus Karmada-specific components (karmada-webhook, karmada-aggregated-apiserver, karmada-scheduler, karmada-controller-manager). Member clusters run a lightweight agent (pull mode) or are reached via their kubeconfigs (push mode). Propagation policies determine which clusters receive which objects.
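Each member cluster is registered on the control plane as a Cluster object whose syncMode selects push or pull. A sketch, assuming an illustrative endpoint and credentials Secret:

```yaml
apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1
spec:
  # Push: the control plane connects out to the member's API server.
  # Pull: a karmada-agent inside the member cluster pulls work down instead.
  syncMode: Push
  apiEndpoint: https://member1.example.com:6443
  # Credentials for reaching the member cluster, stored as a Secret
  # on the control plane (names here are illustrative).
  secretRef:
    namespace: karmada-cluster
    name: member1-credentials
```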
Self-Hosting & Configuration
- kubectl karmada init produces a ready control plane on any kubeadm cluster.
- Helm charts and the Karmada Operator cover production deployments.
- Store cluster kubeconfigs as Secrets; rotate automatically with cert-manager.
- Use PropagationPolicy for namespace-scoped rules and ClusterPropagationPolicy for cluster-scoped ones.
- OverridePolicy rewrites image tags, resources, or args per target cluster.
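Per-cluster rewrites live in OverridePolicy objects. A minimal sketch that swaps the image registry for one target cluster (the `nginx` Deployment, cluster name, and registry host are illustrative):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames: [member1]
      overriders:
        # Rewrite only the registry component of container images
        # delivered to member1.
        imageOverrider:
          - component: Registry
            operator: replace
            value: registry.region-a.example.com
```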
Key Features
- Native kubectl experience across dozens of clusters.
- Declarative cross-cluster service discovery via MCS (Multi-Cluster Services).
- Differential config per cluster through strategic-merge or JSON-patch overrides.
- Works with any CNCF-conformant Kubernetes — EKS, GKE, AKS, OpenShift, k3s.
- Pluggable scheduling: priority, capacity-aware, CPU/GPU topology-aware.
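Cross-cluster service discovery uses the upstream Multi-Cluster Services (MCS) API. A sketch, assuming a `backend` Service in a `demo` namespace: export it from the cluster that runs it, import it where consumers run (both objects are themselves distributed via propagation policies):

```yaml
# Mark the Service as visible to other clusters...
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: backend
  namespace: demo
---
# ...and make it consumable in the importing clusters.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: backend
  namespace: demo
spec:
  type: ClusterSetIP
  ports:
    - port: 8080
      protocol: TCP
```

Note the MCS API handles discovery only; actual cross-cluster traffic still needs a data plane such as Submariner or Cilium Cluster Mesh, as discussed in the FAQ.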
Comparison with Similar Tools
- Kubefed v2 — archived predecessor; Karmada is considered its spiritual successor.
- Cluster API — cluster lifecycle; Karmada is workload lifecycle across clusters.
- Rancher Fleet — GitOps-first multi-cluster; Karmada is API-first.
- Google Anthos / Azure Arc — managed multi-cluster, vendor-locked.
- OpenShift ACM — Red Hat multi-cluster with a GUI; Karmada is upstream and free.
FAQ
Q: Do pods move between clusters automatically? A: Deployments can reschedule on cluster failure; individual pods do not migrate because Kubernetes pods are not live-migratable.
Q: How is cross-cluster networking solved? A: Karmada handles orchestration; pair it with Submariner, Cilium Cluster Mesh, or Istio multicluster for the data plane.
Q: Can I mix K8s versions? A: Yes, within two minor versions of the control plane per project guidance.
Q: CNCF status? A: Incubating as of 2023 — used in production by Huawei Cloud, ICBC, and others.