Apr 15, 2026 · 3 min read

RocksDB — Facebook's Embeddable Persistent Key-Value Store

A C++ LSM-tree storage engine from Meta powering MyRocks, TiKV, Kafka Streams, and more. Optimized for fast SSDs and workloads with high write throughput and low read latency.

TL;DR
RocksDB is a C++ LSM-tree engine optimized for fast SSDs with high write throughput and low read latency.
§01

What it is

RocksDB is an embeddable, persistent key-value store written in C++ and developed at Meta (Facebook). It uses an LSM-tree (Log-Structured Merge-tree) architecture optimized for fast storage devices such as NVMe SSDs. RocksDB powers the storage layers of TiKV, Kafka Streams, and MyRocks (MySQL with a RocksDB backend); CockroachDB also shipped on RocksDB before replacing it with Pebble, a RocksDB-compatible engine.

RocksDB is for database engineers and systems developers who need an embedded storage engine with high write throughput, tunable read performance, and efficient compression.

The project is actively maintained with regular releases and a growing user community. Documentation covers common use cases, and the open-source nature means you can inspect the source code, contribute fixes, and adapt the tool to your specific requirements.

§02

How it saves time or tokens

Building a production-quality storage engine from scratch takes years. RocksDB provides a battle-tested foundation with configurable compaction strategies, bloom filters, compression (LZ4, Zstd, Snappy), column families, and transaction support. Teams embed RocksDB instead of reinventing the storage layer.

§03

How to use

  1. Install RocksDB from source or via a package manager.
  2. Open a database instance with configured options (block cache, compaction style, compression).
  3. Use Put, Get, Delete, and Iterator operations for key-value access.
§04

Example

#include <rocksdb/db.h>
#include <rocksdb/options.h>

#include <cassert>
#include <string>

int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  options.compression = rocksdb::kLZ4Compression;

  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/testdb", &db);
  assert(s.ok());

  // Write
  s = db->Put(rocksdb::WriteOptions(), "key1", "value1");
  assert(s.ok());

  // Read
  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "key1", &value);
  assert(s.ok() && value == "value1");

  delete db;
  return 0;
}
§05

Common pitfalls

  • Default compaction settings are tuned for general workloads. Write-heavy workloads often benefit from Universal compaction (lower write amplification), while space-sensitive workloads should prefer Leveled compaction (lower space amplification).
  • RocksDB block cache size must be tuned to available memory. Setting it too small causes excessive disk reads; setting it too large starves the OS page cache.
  • Write amplification is inherent to LSM-trees. Monitor compaction stats and tune max_bytes_for_level_base and level0_file_num_compaction_trigger to balance write amplification against read performance.
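The knobs named in these pitfalls live on `Options` and `BlockBasedTableOptions`. A configuration sketch with illustrative values, not recommendations:

```cpp
#include <rocksdb/cache.h>
#include <rocksdb/db.h>
#include <rocksdb/table.h>

int main() {
  rocksdb::Options options;

  // Block cache: size it against available RAM, leaving headroom for
  // the OS page cache. 512 MB here is purely illustrative.
  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_cache = rocksdb::NewLRUCache(512 * 1024 * 1024);
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));

  // Compaction knobs from the pitfalls above.
  options.max_bytes_for_level_base = 512 * 1024 * 1024;  // target size of L1
  options.level0_file_num_compaction_trigger = 4;  // L0 files before compaction

  return 0;
}
```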

Before adopting this tool, evaluate whether it fits your team's existing workflow. Read the official documentation thoroughly, and start with a small proof-of-concept rather than a full migration. Community forums, GitHub issues, and Stack Overflow are valuable resources when you encounter edge cases not covered in the documentation.

Frequently Asked Questions

What projects use RocksDB as their storage engine?

TiKV (the storage engine for TiDB), Apache Kafka Streams, MyRocks (MySQL), and Apache Flink all use RocksDB as their underlying storage layer; CockroachDB also used RocksDB before replacing it with Pebble, a RocksDB-compatible engine. RocksDB remains one of the most widely embedded storage engines in production systems.

How does RocksDB differ from LevelDB?

RocksDB is a fork of LevelDB with significant enhancements: multi-threaded compaction, column families, transactions, bloom filters per level, pluggable compression, and rate limiting. LevelDB is simpler but lacks these production features.

What is an LSM-tree?

A Log-Structured Merge-tree writes data to an in-memory buffer (memtable), then flushes to sorted files on disk (SSTables). Background compaction merges and sorts these files. This design optimizes for write throughput at the cost of read amplification.
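The memtable's flush threshold is directly configurable. A config sketch showing the relevant knobs with their documented defaults spelled out:

```cpp
#include <rocksdb/db.h>

int main() {
  rocksdb::Options options;

  // Each memtable is flushed to an SSTable once it reaches this size.
  options.write_buffer_size = 64 * 1024 * 1024;  // 64 MB (the default)

  // Memtables kept in memory before writes stall waiting on a flush.
  options.max_write_buffer_number = 2;  // the default

  return 0;
}
```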

Does RocksDB support transactions?

Yes. RocksDB provides both optimistic and pessimistic transaction support via the TransactionDB and OptimisticTransactionDB APIs. Transactions guarantee atomic writes across multiple keys with snapshot isolation.

What compression algorithms does RocksDB support?

RocksDB supports LZ4, Zstd, Snappy, zlib, and BZip2 compression. You can configure different compression algorithms per level of the LSM-tree to balance CPU usage against storage savings.
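Per-level compression is set through `compression_per_level`. A sketch of the common "light on top, heavy at the bottom" pattern; the level assignments are illustrative:

```cpp
#include <rocksdb/db.h>

int main() {
  rocksdb::Options options;
  options.num_levels = 7;

  // Cheap or no compression for hot upper levels, stronger Zstd
  // for the cold lower levels that hold most of the data.
  options.compression_per_level = {
      rocksdb::kNoCompression,   // L0: freshest data, recompacted soon anyway
      rocksdb::kNoCompression,   // L1
      rocksdb::kLZ4Compression,  // L2
      rocksdb::kLZ4Compression,  // L3
      rocksdb::kLZ4Compression,  // L4
      rocksdb::kZSTD,            // L5
      rocksdb::kZSTD};           // L6: bulk of the data, worth the CPU

  return 0;
}
```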
