Apr 20, 2026 · 3 min read

Memcached — High-Performance Distributed Memory Caching System

Memcached is a free, open-source, high-performance distributed memory object caching system used to speed up dynamic web applications by reducing database load.

Introduction

Memcached is a general-purpose distributed memory caching system originally developed by Brad Fitzpatrick for LiveJournal in 2003. It reduces the number of times an external data source must be read by caching data and objects in RAM, dramatically improving response times for database-driven applications.

What Memcached Does

  • Caches database query results, API responses, and session data in RAM for sub-millisecond retrieval
  • Distributes cached data across multiple servers using consistent hashing
  • Provides a simple key-value protocol accessible from virtually any programming language
  • Automatically evicts least-recently-used items when memory limits are reached
  • Supports atomic increment/decrement operations for counters and rate limiters
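The "simple key-value protocol" mentioned above is a line-oriented text format, which is why clients exist in so many languages. As a sketch (the helper functions here are illustrative, not part of any real client library), this is how a client frames `set`, `get`, and `incr` commands on the wire:

```python
# Minimal framing of Memcached's text protocol (illustrative helpers,
# not a real client library). A "set" is a command line followed by a
# data block; "get" and "incr" are single CRLF-terminated command lines.

def frame_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    # set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def frame_get(key: str) -> bytes:
    return f"get {key}\r\n".encode()

def frame_incr(key: str, delta: int) -> bytes:
    # Server-side atomic increment, useful for counters and rate limiters.
    return f"incr {key} {delta}\r\n".encode()

if __name__ == "__main__":
    print(frame_set("hits", b"41"))
    print(frame_incr("hits", 1))
```

Sending these byte strings over a TCP socket to port 11211 is all a minimal client needs to do; the server replies with equally simple lines such as `STORED` or `END`.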

Architecture Overview

Memcached uses a client-server architecture where one or more memcached daemons listen for connections on a configurable port. Each server operates independently with no inter-node communication. The client library implements the distribution logic, hashing keys to determine which server stores each item. Internally, memcached uses a slab allocator to manage memory efficiently, grouping items of similar sizes into slabs to minimize fragmentation. The LRU eviction policy runs per slab class.
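Because the distribution logic lives entirely in the client, it can be sketched as a simple hash ring. The following is a toy model under stated assumptions (MD5 hashing, 100 points per server; real ketama-style clients use the same idea with tuned hash functions and virtual-node counts):

```python
import hashlib
from bisect import bisect

# Toy consistent-hash ring: each server contributes many points on a
# ring; a key is owned by the first server point at or after the key's
# own hash (wrapping around at the end).

def _hash(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers, points_per_server=100):
        self._points = sorted(
            (_hash(f"{srv}#{i}"), srv)
            for srv in servers
            for i in range(points_per_server)
        )
        self._hashes = [h for h, _ in self._points]

    def server_for(self, key: str) -> str:
        idx = bisect(self._hashes, _hash(key)) % len(self._points)
        return self._points[idx][1]
```

The payoff over naive `hash(key) % n` is that removing a server only remaps the keys that server owned; every other key keeps its placement, so most of the cache survives a pool change.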

Self-Hosting & Configuration

  • Install via package manager (apt, yum, brew) or compile from source
  • Configure memory allocation with -m flag (default 64 MB)
  • Set listen address with -l and port with -p (default 11211)
  • Enable SASL authentication for environments requiring access control
  • Use -t to set the number of worker threads (default 4)
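Putting those flags together, a typical standalone launch might look like the following (values are illustrative; check `memcached -h` for your build's defaults):

```shell
# Start memcached with 256 MB of cache, bound to localhost on the
# default port, with 4 worker threads, detached as a daemon.
memcached -m 256 -l 127.0.0.1 -p 11211 -t 4 -d

# Verify it is answering by sending "stats" over the text protocol.
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -n 5
```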

Key Features

  • Sub-millisecond latency for cached reads and writes
  • Linear horizontal scaling by adding more servers to the pool
  • Protocol support for both text and binary formats
  • Multi-threaded architecture utilizing multiple CPU cores
  • Mature ecosystem with client libraries for 30+ programming languages

Comparison with Similar Tools

  • Redis — offers richer data structures and persistence, but Memcached is simpler and can be faster for pure key-value caching
  • DragonflyDB — modern multi-threaded alternative with Redis/Memcached protocol compatibility
  • KeyDB — multithreaded Redis fork with active replication
  • Hazelcast — distributed computing platform with caching, adds JVM dependency
  • Varnish — HTTP-level cache for web content rather than application-level object caching

FAQ

Q: When should I use Memcached over Redis? A: Memcached excels when you need simple key-value caching with multi-threaded performance and minimal memory overhead. Choose Redis when you need data structures, persistence, or pub/sub.

Q: Does Memcached persist data to disk? A: No. Memcached is purely in-memory. If a server restarts, all cached data is lost. Use it as a cache layer, not a primary data store.

Q: How does Memcached handle server failures? A: Client libraries automatically route requests to remaining servers. The consistent hashing algorithm minimizes cache misses when the server pool changes.

Q: What is the maximum item size? A: The default maximum value size is 1 MB, adjustable with the -I flag (the server warns above 1 MB and caps the limit at 1 GB). Large values reduce caching efficiency, so raising the limit is rarely worthwhile.
