Configs · May 15, 2026 · 3 min read

Virtual Kubelet — Run Kubernetes Pods on Serverless and External Backends

Virtual Kubelet implements the Kubernetes kubelet interface and delegates pod execution to pluggable providers, allowing pods to be scheduled onto serverless platforms, IoT edge nodes, or any other non-traditional compute backend.

Agent ready

This asset can be read and installed directly by agents. TokRepo exposes a universal CLI command, an install contract, metadata JSON, an adapter-aware plan, and raw content links so agents can judge fit, risk, and next actions.

Native · 98/100 · Policy: allow

  • Agent surface: Any MCP/CLI agent
  • Kind: Skill
  • Install: Single
  • Trust: Established
  • Entrypoint: Virtual Kubelet Overview

Universal CLI install command:
npx tokrepo install 90a1ff65-5058-11f1-9bc6-00163e2b0d79

Introduction

Virtual Kubelet masquerades as a regular Kubernetes node by implementing the kubelet API. Instead of running containers on a real host, it delegates pod execution to an external provider — Azure Container Instances, AWS Fargate, HashiCorp Nomad, or custom backends. This lets clusters burst into serverless capacity or integrate with non-Kubernetes infrastructure.

What Virtual Kubelet Does

  • Registers as a node in the Kubernetes cluster and accepts pod scheduling
  • Delegates pod lifecycle management (create, update, delete) to a pluggable provider
  • Reports pod status, resource capacity, and node conditions back to the API server
  • Enables transparent cluster bursting to serverless platforms for peak workloads
  • Supports custom providers for IoT, edge computing, and batch job backends

Architecture Overview

Virtual Kubelet runs as a single Go binary that registers itself with the cluster as a node. The core library handles node registration, lease renewal, and pod status synchronization with the Kubernetes API server. Provider implementations satisfy a Go interface (CreatePod, GetPod, DeletePod, GetPodStatus) and translate Kubernetes pod specs into the target platform's execution model. Taints and tolerations ensure only intended workloads land on the virtual node.

Self-Hosting & Configuration

  • Build from source or use provider-specific container images (e.g., Azure ACI provider, AWS Fargate provider)
  • Requires a valid kubeconfig with permissions to register nodes and manage pods
  • Configure node taints to prevent accidental scheduling of non-burst workloads
  • Provider-specific settings (cloud credentials, region, resource limits) are passed via environment variables or config files
  • Node capacity is reported to the scheduler by the provider (often set to large values for serverless backends)
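The taint guidance above can be sketched as a pod manifest that opts in to the virtual node. The taint key and value follow the mock-provider example used elsewhere on this page; the nodeSelector label is provider-dependent and shown here only as an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-job
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sleep", "3600"]
  # Tolerate the virtual node's taint so the scheduler may place the pod there
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Equal
      value: mock
      effect: NoSchedule
  # Optionally pin the pod to the virtual node; this label varies by provider
  nodeSelector:
    type: virtual-kubelet
```

Workloads without the toleration stay on real nodes, which is what makes the taint an effective guard against accidental bursting.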

Key Features

  • CNCF Sandbox project with a well-defined provider interface for custom backends
  • Transparent to the Kubernetes scheduler: pods are scheduled using standard affinity, tolerations, and resource requests
  • Providers exist for Azure Container Instances, AWS Fargate, Nomad, Alibaba ECI, and more
  • Node-level resource metrics reported back to the cluster for HPA and monitoring integration
  • Mock provider included for development, testing, and CI validation

Comparison with Similar Tools

  • Karpenter — Provisions real nodes on demand; Virtual Kubelet provisions virtual nodes backed by external compute
  • Kubernetes Cluster Autoscaler — Scales node groups; does not integrate non-Kubernetes backends
  • KubeVirt — Runs VMs as pods on real nodes; Virtual Kubelet delegates pods off-cluster
  • KubeEdge — Extends Kubernetes to edge devices with a real agent; Virtual Kubelet abstracts away the agent entirely
  • Admiralty — Multi-cluster scheduling via proxy pods; similar concept but focused on cluster federation

FAQ

Q: Does Virtual Kubelet run actual containers on the host? A: No. It acts as a proxy. The actual execution happens on the external platform defined by the provider.

Q: How do I prevent regular workloads from landing on the virtual node? A: Apply a taint (e.g., virtual-kubelet.io/provider=mock:NoSchedule) and add matching tolerations only to workloads intended for the virtual node.

Q: Can I use Virtual Kubelet with Horizontal Pod Autoscaler? A: Yes. The provider reports pod and node metrics, and HPA scales pod replicas as usual. Pods scheduled on the virtual node burst into the external backend.

Q: Which providers are production-ready? A: The Azure ACI provider is the most mature and widely used. AWS Fargate, Alibaba ECI, and HashiCorp Nomad providers are also available.
