# TiKV — Distributed Transactional Key-Value Store on Raft

> A CNCF-graduated distributed key-value store written in Rust that powers TiDB. Provides horizontal scaling, strong consistency via Raft, geo-replication, and ACID transactions with Percolator-style MVCC.

## Quick Use

```bash
# Install TiUP (PingCAP's cluster manager)
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
source ~/.bashrc

# Single-node TiKV + PD for local dev
tiup playground --mode tikv-slim

# Hit it with the raw client
tiup ctl:nightly tikv --pd 127.0.0.1:2379 --host 127.0.0.1:20160 store
```

```python
# Python raw client (pip install tikv-client)
from tikv_client import RawClient

client = RawClient.connect(["127.0.0.1:2379"])
client.put(b"hello", b"world")
print(client.get(b"hello"))
```

## Introduction

TiKV is a distributed, transactional, strongly consistent key-value store written in Rust, originally built to back TiDB. It replicates regions of ordered keys via Raft, provides ACID transactions with a Google Percolator-style MVCC layer, and scales horizontally by splitting and rebalancing regions automatically.

## What TiKV Does

- Stores ordered byte keys across many nodes with auto-sharding into 96 MB regions.
- Replicates each region with Raft for strong consistency and automatic failover.
- Exposes two APIs: raw KV (single-key ops) and transactional KV (multi-key Percolator MVCC).
- Lets the Placement Driver (PD) schedule region splits, merges, and replica placement for balance.
- Integrates with Prometheus, Grafana, and Jaeger-style tracing for deep observability.

## Architecture Overview

Clients ask PD which TiKV node owns the region for a key, then talk to that node directly. Each region is a Raft group with three or more replicas; the leader appends entries to local RocksDB instances (RaftDB for the log, KvDB for data). Writes become Raft log entries that are replicated to a quorum before being applied.
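The routing step described above (PD maps a key to the region that owns it, and the client then talks to that region's Raft leader) can be sketched with a toy in-memory region table. This is an illustration of the lookup only, not PD's actual gRPC API; the region boundaries and store names are made up:

```python
import bisect

# Toy region table: each region owns the key range [start_key, end_key)
# and has a current Raft leader store. In reality PD serves this mapping.
REGIONS = [
    (b"",  b"g", "store-1"),   # keys < "g"
    (b"g", b"p", "store-2"),   # "g" <= key < "p"
    (b"p", b"",  "store-3"),   # keys >= "p" (empty end_key = +infinity)
]
STARTS = [r[0] for r in REGIONS]

def locate(key: bytes) -> str:
    """Return the leader store of the region owning `key`."""
    # Rightmost region whose start_key <= key.
    idx = bisect.bisect_right(STARTS, key) - 1
    return REGIONS[idx][2]

print(locate(b"apple"))   # store-1
print(locate(b"hello"))   # store-2
print(locate(b"zebra"))   # store-3
```

When a region splits or its leader moves, PD updates this mapping and clients refresh their cached routes on the next stale-route error.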
PD continuously monitors load and issues split, merge, leader-transfer, and replica-move operations to keep the cluster balanced.

## Self-Hosting & Configuration

- Deploy with TiUP, the Kubernetes operator (`tidb-operator`), or Ansible for bare metal.
- Size at least 3 PD + 3 TiKV nodes; TiKV benefits greatly from NVMe SSDs and 10 GbE networking.
- Tune `storage.block-cache.capacity` to roughly 45% of RAM on dedicated nodes.
- Use placement rules to pin replicas to AZs or racks for geo-distribution.
- Enable TLS between PD, TiKV, and clients for production deployments.

## Key Features

- Snapshot-isolation and repeatable-read transactions across thousands of nodes.
- Online schema and range changes without locking, inherited by TiDB.
- Native support for async commit and 1PC optimizations that cut latency.
- Coprocessor framework pushes filters and aggregates down to the storage layer.
- Multi-Raft architecture scales to PB-class clusters in production among PingCAP's users.

## Comparison with Similar Tools

- **etcd** — also a Raft-based KV store, but single-group and focused on small data.
- **CockroachDB KV layer** — very similar design; TiKV is a standalone product, with TiDB providing the SQL layer.
- **FoundationDB** — ACID KV store with a different transaction model; harder to operate.
- **Cassandra / ScyllaDB** — eventually consistent wide-column stores; a different consistency trade-off.
- **DynamoDB** — AWS's managed equivalent; TiKV is self-hosted and open source.

## FAQ

**Q:** Do I need TiDB to use TiKV?
**A:** No — TiKV is a standalone KV store with Rust, Go, Java, Python, and Node.js clients.

**Q:** How does TiKV scale writes?
**A:** Regions split at size/QPS thresholds and PD rebalances leaders across nodes.

**Q:** What's the transaction model?
**A:** Percolator 2PC with MVCC timestamps from PD, plus async commit for lower latency.

**Q:** Is TiKV production-ready?
**A:** Yes — it graduated from the CNCF in 2020 and runs at PB scale at companies like Zhihu and Square.
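The transaction model in the FAQ (Percolator 2PC with MVCC timestamps) can be illustrated with a toy in-memory sketch: prewrite locks every key at a start timestamp, commit then writes versions at a commit timestamp, and reads see the newest version at or below their read timestamp. This is a simplification for intuition, not TiKV's implementation; the timestamp counter stands in for PD's timestamp oracle:

```python
import itertools

_tso = itertools.count(1)   # stand-in for PD's timestamp oracle (TSO)
data = {}                   # key -> list of (commit_ts, value) versions
locks = {}                  # key -> (primary_key, start_ts, value)

def prewrite(writes, primary, start_ts):
    """Phase 1: lock every key in the txn; fail on a lock conflict."""
    if any(k in locks for k in writes):
        return False
    for k, v in writes.items():
        locks[k] = (primary, start_ts, v)
    return True

def commit(keys, primary, commit_ts):
    """Phase 2: commit the primary key first, then the secondaries."""
    for k in [primary] + [k for k in keys if k != primary]:
        _, _, v = locks.pop(k)
        data.setdefault(k, []).append((commit_ts, v))

def get(key, read_ts):
    """MVCC read: newest committed version at or before read_ts."""
    visible = [(ts, v) for ts, v in data.get(key, []) if ts <= read_ts]
    return max(visible)[1] if visible else None

start_ts = next(_tso)
assert prewrite({b"a": b"1", b"b": b"2"}, primary=b"a", start_ts=start_ts)
commit([b"a", b"b"], primary=b"a", commit_ts=next(_tso))
print(get(b"a", read_ts=next(_tso)))   # b'1'
```

In the real protocol, the primary key's lock is the commit point: once the primary is committed, secondaries can be committed (or rolled forward by readers) asynchronously, which is what TiKV's async commit optimization builds on.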
## Sources

- https://github.com/tikv/tikv
- https://tikv.org/docs/