Scripts · April 15, 2026 · 1 min read

MetalLB — Load Balancer Implementation for Bare-Metal Kubernetes

MetalLB is a network load-balancer implementation for bare-metal Kubernetes clusters using standard routing protocols (Layer 2 ARP/NDP or BGP). It lets Services of type LoadBalancer work on clusters that do not run on a cloud provider.

Introduction

Outside a cloud, a Kubernetes Service of type LoadBalancer never receives an external IP — kubectl shows it as <pending> — because there is no cloud controller to allocate one. MetalLB fills that gap with a small set of Kubernetes-native components that advertise Service IPs to the local network via ARP/NDP or BGP, giving on-prem clusters the same ergonomics as EKS or GKE.
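Concretely, a plain LoadBalancer Service like the sketch below (the name, labels, and ports are illustrative) sits with its EXTERNAL-IP stuck at <pending> on bare metal until MetalLB — or a cloud controller — assigns an address:

```yaml
# Illustrative Service; without a load-balancer controller,
# its EXTERNAL-IP remains <pending> on bare-metal clusters.
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical Service name
spec:
  type: LoadBalancer
  selector:
    app: web           # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080
```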

What MetalLB Does

  • Allocates IPs from administrator-defined pools to LoadBalancer Services
  • Advertises those IPs using Layer 2 (ARP/NDP) or Layer 3 (BGP)
  • Integrates with your existing routers (in BGP mode), so failover follows standard routing convergence
  • Supports IPv4 and IPv6 pools, including dual-stack Services
  • Runs entirely as pods; no changes to the cluster data plane are required

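The first two bullets boil down to two small resources. A minimal Layer 2 setup might look like the sketch below — the pool name and address range are placeholders for your environment:

```yaml
# Example pool: hand MetalLB a slice of the local subnet to allocate from.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool              # example name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # example range; must be unused on your LAN
---
# Announce addresses from that pool via ARP/NDP (Layer 2 mode).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```

With these applied, new LoadBalancer Services receive an IP from the pool automatically.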
Architecture Overview

MetalLB ships two components: a controller Deployment that watches Services and assigns IPs, and a speaker DaemonSet that advertises assigned IPs from each node. In L2 mode, one speaker becomes the elected "owner" of a Service IP and responds to ARP/NDP queries. In BGP mode, every speaker peers with upstream routers so traffic can be ECMP-balanced across all nodes. Configuration is entirely CRD-driven (IPAddressPool, L2Advertisement, BGPAdvertisement, BGPPeer).

Self-Hosting & Configuration

  • Install via FRRouting-based manifests for BGP or native manifests for L2-only
  • Define IPAddressPool resources per subnet or tenant
  • Use BGPPeer + BGPAdvertisement for advanced routing with communities and local-pref
  • Create the memberlist secret so speaker gossip traffic is authenticated and isolated per cluster
  • Validate with kubectl describe service — the LoadBalancer Ingress field should show the allocated IP
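The BGPPeer + BGPAdvertisement pairing from the list above can be sketched as follows — the ASNs, router address, and community value are examples, not recommendations:

```yaml
# Example peering with an upstream router (note: BGPPeer is v1beta2).
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: tor-router                # example name
  namespace: metallb-system
spec:
  myASN: 64500                    # example: cluster's private ASN
  peerASN: 64501                  # example: router's ASN
  peerAddress: 10.0.0.1           # example: router's IP
---
# Advertise pool addresses with a community tag and local preference.
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: default-bgp
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool                  # assumes a pool with this name exists
  communities:
  - 64500:100                     # example community
  localPref: 100
```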

Key Features

  • Two modes: L2 (no router configuration needed, one active node per IP) and BGP (ECMP across all nodes)
  • IPv6 and dual-stack support
  • Fine-grained pool scoping via namespace and Service selectors
  • FRR integration unlocks BFD, communities and unnumbered peering
  • Prometheus metrics and events for every allocation and advertisement
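The pool-scoping feature mentioned above can be expressed directly on the pool. A sketch, assuming a recent MetalLB release that supports serviceAllocation (namespace and label values are hypothetical):

```yaml
# Restrict this pool to Services in one namespace with a matching label.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: team-a-pool               # example name
  namespace: metallb-system
spec:
  addresses:
  - 10.10.20.0/28                 # example range
  serviceAllocation:
    priority: 50                  # lower value wins when multiple pools match
    namespaces:
    - team-a                      # example namespace
    serviceSelectors:
    - matchLabels:
        tier: frontend            # example Service label
```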

Comparison with Similar Tools

  • kube-vip — combines VIP + control-plane HA; simpler but less BGP-friendly
  • PureLB — MetalLB fork with a different allocator model
  • Cilium LB IPAM — built into Cilium; requires the Cilium data plane
  • Calico LoadBalancer — ships with Calico BGP; overlaps with MetalLB BGP mode
  • External hardware LBs (F5, A10) — powerful but expensive and out-of-cluster

FAQ

Q: Does L2 mode balance traffic across nodes? A: No. In L2 mode a single node answers ARP for each VIP. For active/active balancing use BGP.

Q: Can I mix IPv4 and IPv6 pools? A: Yes. Create separate pools per family and MetalLB will assign a VIP to each Service that requests it.
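A dual-stack request is expressed on the Service itself via the standard Kubernetes fields; given one IPv4 and one IPv6 pool, MetalLB assigns one VIP per family. A sketch with illustrative names:

```yaml
# Request one IPv4 and one IPv6 VIP for the same Service.
apiVersion: v1
kind: Service
metadata:
  name: web-dual                  # hypothetical name
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack
  ipFamilies: [IPv4, IPv6]
  selector:
    app: web                      # hypothetical label
  ports:
  - port: 80
```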

Q: What happens if I run MetalLB alongside a cloud controller? A: Typically the cloud provider handles the LoadBalancer type; you can set loadBalancerClass so MetalLB only touches Services it owns.
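That opt-in looks like the sketch below; it assumes MetalLB's components were started with a matching --lb-class flag, and the class string itself is illustrative:

```yaml
# Only a MetalLB instance configured with this --lb-class value
# will reconcile this Service; the cloud controller ignores it.
apiVersion: v1
kind: Service
metadata:
  name: on-prem-svc               # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/metallb   # must match MetalLB's --lb-class
  selector:
    app: on-prem                  # hypothetical label
  ports:
  - port: 443
```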

Q: Is BGP mode production ready? A: Yes, including FRR-based peering, BFD and multi-hop. Many telcos and on-prem shops run it in production.

