Configs · Apr 15, 2026 · 3 min read

BadgerDB — Fast Embeddable Key-Value Store in Pure Go

A pure-Go LSM-tree database built for SSDs, the storage engine behind Dgraph. No CGO, ACID transactions, encryption at rest, tiny binary — ideal for Go services that need embedded persistence.

TL;DR
BadgerDB is a pure-Go LSM-tree key-value store with ACID transactions, built for SSD workloads.
§01

What it is

BadgerDB is a high-performance embeddable key-value database written in pure Go. It uses an LSM-tree (Log-Structured Merge-tree) architecture optimized for SSDs, separating keys from values to reduce write amplification. BadgerDB powers the Dgraph graph database as its storage engine.

The target audience includes Go developers who need an embedded database without CGO dependencies, systems engineers building local storage layers, and projects that require ACID transactions with encryption at rest in a single binary.

§02

How it saves time or tokens

BadgerDB eliminates the need for external database processes. Your Go application embeds the database directly, removing network overhead, connection pooling, and deployment complexity. A key-value store that would require deploying and managing Redis or RocksDB becomes a single go get and a few lines of code.

The pure-Go implementation means cross-compilation works out of the box. No CGO, no system library dependencies, no build flags -- just go build.

§03

How to use

  1. Install BadgerDB:
go get github.com/dgraph-io/badger/v4
  2. Open a database and perform basic operations:
package main

import (
    "fmt"
    "log"
    badger "github.com/dgraph-io/badger/v4"
)

func main() {
    opts := badger.DefaultOptions("/tmp/badger")
    db, err := badger.Open(opts)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Write
    err = db.Update(func(txn *badger.Txn) error {
        return txn.Set([]byte("hello"), []byte("world"))
    })
    if err != nil {
        log.Fatal(err)
    }

    // Read
    err = db.View(func(txn *badger.Txn) error {
        item, err := txn.Get([]byte("hello"))
        if err != nil {
            return err
        }
        return item.Value(func(val []byte) error {
            fmt.Printf("Value: %s\n", val)
            return nil
        })
    })
    if err != nil {
        log.Fatal(err)
    }
}
  3. Run the application:
go run main.go
§04

Example

Bulk loads with a WriteBatch (faster than many small transactions, but without transactional conflict checks):

wb := db.NewWriteBatch()
defer wb.Cancel()
for i := 0; i < 1000; i++ {
    key := fmt.Sprintf("key-%d", i)
    val := fmt.Sprintf("value-%d", i)
    if err := wb.Set([]byte(key), []byte(val)); err != nil {
        log.Fatal(err)
    }
}
if err := wb.Flush(); err != nil {
    log.Fatal(err)
}
§05


Common pitfalls

  • BadgerDB stores data in a directory on disk. Not setting the directory path or using a tmpfs path in production causes data loss on restart.
  • The value log requires periodic garbage collection. Call db.RunValueLogGC(0.5) in a background goroutine to reclaim space from deleted keys.
  • Values above the value threshold are stored in the value log rather than inline in the LSM tree. Use the WithValueThreshold option to tune where that boundary falls for your value sizes.
  • BadgerDB is designed for SSDs. HDD performance degrades significantly due to random read patterns during compaction.
  • In-memory mode (WithInMemory(true)) is useful for testing but loses all data when the process exits.

Frequently Asked Questions

How does BadgerDB compare to BoltDB?

BoltDB uses a B+ tree and is optimized for read-heavy workloads. BadgerDB uses an LSM tree and is optimized for write-heavy workloads on SSDs. BadgerDB supports concurrent reads and writes, while BoltDB allows only one writer at a time. Choose based on your read/write ratio.

Does BadgerDB support encryption?

Yes. BadgerDB supports encryption at rest using AES. Enable it by setting the encryption key in the options: opts.WithEncryptionKey(key). The key must be 16, 24, or 32 bytes for AES-128, AES-192, or AES-256 respectively.

Can BadgerDB handle large datasets?

BadgerDB can handle datasets larger than RAM by keeping keys in memory and values on disk. The key-value separation design means memory usage scales with the number of keys, not the total data size. For datasets with billions of keys, monitor memory usage.

Is BadgerDB thread-safe?

Yes. BadgerDB is designed for concurrent access from multiple goroutines. Transactions provide isolation. Multiple read transactions can run concurrently, and write transactions are serialized internally.

Does BadgerDB support TTL for keys?

Yes. Set a time-to-live on entries using txn.SetEntry(badger.NewEntry(key, value).WithTTL(time.Hour)). Expired entries are cleaned up during garbage collection.
