Configs · April 11, 2026 · 1 min read

etcd — Distributed Reliable Key-Value Store for Critical Data

etcd is a strongly consistent, distributed key-value store for configuration, service discovery, and coordination. It uses the Raft consensus algorithm and powers Kubernetes, OpenShift, CoreOS, and many other distributed systems.

Introduction

etcd is a strongly consistent, distributed key-value store providing a reliable way to store data across a cluster of machines. It uses the Raft consensus algorithm for leader election and log replication. Originally built at CoreOS, it is now a graduated CNCF project and serves as the primary datastore for Kubernetes.

What etcd Does

  • Distributed KV — reliable storage with strong consistency
  • Raft consensus — leader election and log replication
  • Watches — subscribe to key changes in real time
  • Leases — TTL-based key expiration
  • Transactions — atomic multi-key compare-and-swap
  • MVCC — multi-version concurrency control, historic reads
  • gRPC API — binary protocol for performance
  • Dynamic reconfiguration — add/remove members at runtime
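The transaction primitive in the list above (atomic compare-and-swap) can be illustrated with a toy model. This is a hypothetical sketch of the semantics, not the real etcd client API:

```python
# Toy model of an etcd-style transaction: compare a key's value, then
# atomically apply one of two branches of writes. Hypothetical sketch;
# the real etcd txn API compares revisions/values over gRPC.

def txn(store, key, expected, then_ops, else_ops):
    """If store[key] == expected, apply then_ops; otherwise apply
    else_ops. Each op is a (key, value) pair. Returns True when the
    compare succeeded."""
    succeeded = store.get(key) == expected
    for k, v in (then_ops if succeeded else else_ops):
        store[k] = v
    return succeeded

store = {"config/leader": "node1"}
# Compare-and-swap: promote node2 only if node1 is still the leader.
ok = txn(store, "config/leader", "node1",
         then_ops=[("config/leader", "node2")],
         else_ops=[("config/cas-failed-by", "node2")])
```

Because the compare and the writes happen as one unit, two clients racing to become leader cannot both succeed; the loser falls into the else branch.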

Architecture

A cluster of 3 or 5 etcd members runs the Raft protocol. One member is elected leader; the others are followers. Writes go through the leader and are replicated to a majority before they commit. Reads can be linearizable (routed through the leader) or serializable (served locally, possibly slightly stale). Key revisions are tracked with MVCC.
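The MVCC mechanism mentioned above can be sketched with a toy store in which every write bumps a global revision and reads can target a historical revision. This is illustrative only; real etcd keeps revisions in a persistent backend:

```python
# Toy MVCC key-value store: each put increments a cluster-wide
# revision, and gets can read the keyspace as of any past revision.

class MVCCStore:
    def __init__(self):
        self.revision = 0
        self.history = {}  # key -> list of (revision, value)

    def put(self, key, value):
        self.revision += 1
        self.history.setdefault(key, []).append((self.revision, value))
        return self.revision

    def get(self, key, revision=None):
        """Return the newest value written at or before `revision`
        (defaults to the current head revision)."""
        if revision is None:
            revision = self.revision
        for rev, value in reversed(self.history.get(key, [])):
            if rev <= revision:
                return value
        return None

s = MVCCStore()
s.put("a", "v1")   # revision 1
s.put("b", "x")    # revision 2
s.put("a", "v2")   # revision 3
```

Reading `"a"` at revision 1 returns `"v1"` even after later writes, which is the property that makes historic reads and reliable watches possible.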

Self-Hosting

# First member (infra1) of a 3-node cluster; run the analogous
# command on infra2 (10.0.1.11) and infra3 (10.0.1.12)
etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.10:2380 \
     --listen-peer-urls http://10.0.1.10:2380 \
     --listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
     --advertise-client-urls http://10.0.1.10:2379 \
     --initial-cluster-token etcd-cluster-1 \
     --initial-cluster infra1=http://10.0.1.10:2380,infra2=http://10.0.1.11:2380,infra3=http://10.0.1.12:2380 \
     --initial-cluster-state new
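Once all three members are up, the cluster can be checked and smoke-tested with etcdctl (v3 API). Endpoints below are the advertise-client-urls from the commands above:

```shell
# Verify membership and health of the cluster bootstrapped above.
export ETCDCTL_API=3
etcdctl --endpoints=http://10.0.1.10:2379 member list
etcdctl --endpoints=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379 endpoint health

# Smoke test: write a key, then read it back.
etcdctl --endpoints=http://10.0.1.10:2379 put /demo/hello world
etcdctl --endpoints=http://10.0.1.10:2379 get /demo/hello
```

Note the bootstrap commands use plain HTTP for brevity; production clusters should use TLS for both peer and client URLs.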

Key Features

  • Strong consistency (Raft)
  • Watches for change notifications
  • Lease TTLs
  • MVCC historical reads
  • Transactions (compare-and-swap)
  • Role-based access control
  • TLS encryption
  • Dynamic cluster reconfiguration
  • gRPC client protocol

Comparison

Store      Consistency         Use Case
etcd       Strong (Raft)       Config, K8s state
Consul     Strong (Raft)       Service discovery + KV
ZooKeeper  Strong (ZAB)        Coordination (legacy)
Redis      Eventual (cluster)  Cache, queues
TiKV       Strong (Raft)       Transactional KV store

FAQ

Q: How many cluster nodes? A: An odd number (3, 5, 7) so a majority can always form. Three nodes tolerate one failure; five tolerate two. Larger clusters increase write latency, because every write must be acknowledged by a majority of members.
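The quorum arithmetic behind this answer is simple enough to sketch: a cluster of n members needs a majority (n//2 + 1) to commit writes, so it tolerates the remainder as failures.

```python
# Fault tolerance of an n-member Raft cluster: failures survivable
# while a write-committing majority still exists.

def fault_tolerance(n):
    majority = n // 2 + 1
    return n - majority

for n in (3, 4, 5):
    print(n, fault_tolerance(n))
# A 4-node cluster tolerates no more failures than a 3-node one,
# which is why odd cluster sizes are recommended.
```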

Q: etcd vs Consul? A: etcd focuses on KV + strong consistency; Consul is broader (service discovery, health checks, service mesh). K8s uses etcd because it only needs reliable KV storage.

Q: Data size limit? A: The backend quota is 2 GiB by default and can be raised to around 8 GiB. If you need more, shard data across key prefixes or clusters, or use a different store; etcd is not a general-purpose database.
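The limit in this answer is controlled by etcd's backend quota flag. A minimal fragment raising it to 8 GiB (8589934592 bytes), reusing the member name from the bootstrap example above:

```shell
# Raise the storage quota from the 2 GiB default to 8 GiB.
etcd --name infra1 --quota-backend-bytes=8589934592
```

When the quota is exceeded, the cluster raises an alarm and rejects writes until space is reclaimed, so compaction and defragmentation should be part of routine operations.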
