Mem0 — Memory Layer for AI Agents
Add persistent, personalized memory to any AI agent. Learns user preferences, adapts context, reduces tokens. 51K+ stars, used by 100K+ devs.
What it is
Mem0 is an open-source memory layer for AI agents and applications. It adds persistent, personalized memory that lets AI agents learn user preferences, remember past interactions, and adapt context across conversations. Instead of starting fresh every session, your AI agent builds a growing understanding of each user.
Mem0 targets AI application developers building chatbots, copilots, and agents that need long-term context. It stores memories as structured records with metadata, supports semantic search over memories, and integrates with major LLM frameworks.
Why it saves time or tokens
Without persistent memory, every conversation must re-establish context: user preferences, past decisions, project details. This wastes tokens repeating information. Mem0 stores relevant facts from past sessions and retrieves them automatically, injecting only the relevant memories into the current context. This reduces per-session token usage while improving response quality.
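The injection pattern described above can be sketched in plain Python. The helper function and memory strings here are illustrative assumptions, not Mem0's API: the point is that only the retrieved facts are re-sent each turn, rather than the full conversation history.

```python
# Illustrative sketch: inject only retrieved memories into the system
# prompt, instead of replaying past sessions. The memory strings below
# are hypothetical examples, not actual Mem0 output.

def build_system_prompt(base_prompt: str, memories: list[str]) -> str:
    """Prepend retrieved user memories to the system prompt."""
    if not memories:
        return base_prompt
    memory_block = "\n".join(f"- {m}" for m in memories)
    return f"{base_prompt}\n\nKnown about this user:\n{memory_block}"

retrieved = [
    "User prefers Python for backend work",
    "User's project uses PostgreSQL",
]
prompt = build_system_prompt("You are a coding assistant.", retrieved)
print(prompt)
```

A few dozen tokens of distilled facts replace what might otherwise be thousands of tokens of replayed transcript.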
How to use
- Install Mem0: `pip install mem0ai`
- Initialize the memory system with a storage backend
- Add memories from conversations and retrieve them for future sessions
Example
```python
from mem0 import Memory

memory = Memory()

# Add memories from a conversation
memory.add(
    "I prefer Python for backend and TypeScript for frontend",
    user_id="user-123",
)
memory.add(
    "My project uses PostgreSQL with Prisma ORM",
    user_id="user-123",
)

# Retrieve relevant memories for a new conversation
results = memory.search(
    "What tech stack should I use?",
    user_id="user-123",
)
for mem in results:
    print(mem["memory"])  # prints stored preferences
```
| Feature | Description |
|---|---|
| Persistent storage | Memories survive across sessions |
| Semantic search | Find relevant memories by meaning |
| User isolation | Per-user memory spaces |
| Auto-extraction | Extract facts from conversations |
| Multi-backend | Local, cloud, or custom storage |
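The multi-backend row can be made concrete with a configuration sketch. The provider names and key layout below follow Mem0's documented `Memory.from_config` pattern, but check them against the docs for the version you install.

```python
# Hedged sketch of a Mem0 config dict selecting a vector store and LLM.
# Key names follow the from_config pattern; verify against the Mem0
# docs for your installed version.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini"},
    },
}

# Instantiating requires a running Qdrant instance and an API key:
# from mem0 import Memory
# memory = Memory.from_config(config)
```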
Related on TokRepo
- AI memory providers — memory solutions for AI agents on TokRepo
- Mem0 deep-dive — dedicated Mem0 page on TokRepo
Common pitfalls
- Storing too many low-quality memories pollutes retrieval results; implement memory importance scoring or periodic cleanup
- Memory retrieval adds latency to each conversation turn; cache frequently accessed memories
- User privacy requires careful memory management; implement memory deletion and export capabilities for compliance
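One way to act on the first pitfall is a periodic importance-scoring sweep over stored memories. The record shape, scoring formula, and threshold below are assumptions for illustration, not Mem0 internals.

```python
import time

# Hypothetical memory records; Mem0's actual schema may differ.
memories = [
    {"id": "m1", "memory": "User prefers dark mode",
     "hits": 14, "created": time.time() - 90 * 86400},
    {"id": "m2", "memory": "User asked about the weather once",
     "hits": 1, "created": time.time() - 90 * 86400},
    {"id": "m3", "memory": "Project uses PostgreSQL",
     "hits": 6, "created": time.time() - 5 * 86400},
]

def importance(mem: dict, now: float) -> float:
    """Score = retrieval frequency decayed by age in days."""
    age_days = (now - mem["created"]) / 86400
    return mem["hits"] / (1.0 + age_days)

def prune(records: list[dict], threshold: float = 0.1) -> list[dict]:
    """Keep only memories whose importance clears the threshold."""
    now = time.time()
    return [m for m in records if importance(m, now) >= threshold]

kept = prune(memories)
print([m["id"] for m in kept])  # the one-off weather memory is dropped
```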
Frequently Asked Questions
How is Mem0 different from RAG?
RAG retrieves from static document collections. Mem0 stores dynamic memories from conversations that grow over time. RAG answers "what does this document say?" while Mem0 answers "what does this user prefer?" They are complementary: use RAG for knowledge bases and Mem0 for personalization.
What storage backends does Mem0 support?
Mem0 supports local in-memory storage for development, vector databases like Qdrant and ChromaDB for production, and managed cloud storage through the Mem0 Platform. You can also implement custom storage backends by extending the base storage class.
Does Mem0 work with any LLM?
Yes. Mem0 is LLM-agnostic. It uses an LLM for memory extraction (identifying facts from conversations) and an embedding model for semantic search. You configure which LLM and embedding model to use. It works with OpenAI, Anthropic, and local models.
How does memory extraction work?
When you add a conversation to Mem0, it uses an LLM to extract factual statements and preferences from the text. "I prefer dark mode and use VS Code" becomes two separate memories: "User prefers dark mode" and "User uses VS Code". This structured extraction improves retrieval accuracy.
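The extraction step can be pictured as a mapping from one raw utterance to discrete records. In Mem0 this split is performed by an LLM; the field names below are illustrative, not Mem0's exact schema.

```python
# Illustrative: one compound utterance becomes multiple atomic memories.
# The split is done by an LLM in Mem0; these records only show the
# target shape, with hypothetical field names.
utterance = "I prefer dark mode and use VS Code"

extracted = [
    {"user_id": "user-123", "memory": "User prefers dark mode"},
    {"user_id": "user-123", "memory": "User uses VS Code"},
]

for record in extracted:
    print(record["memory"])
```

Atomic records retrieve better than compound ones: a query about editors matches the second memory without dragging in the unrelated dark-mode fact.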
Is Mem0 production-ready?
Yes. Mem0 is designed for production use with proper storage backends, user isolation, and API access. The Mem0 Platform provides a managed service with additional features like analytics and memory management dashboards. The open-source library handles the core memory operations reliably.
Citations (3)
- Mem0 GitHub — Mem0 is an open-source memory layer for AI agents
- Mem0 Docs — Mem0 provides persistent personalized memory
- Anthropic Agent Patterns — Memory in AI agent architectures
Source & Thanks
- GitHub: mem0ai/mem0
- License: Apache 2.0
- Stars: 51,000+
- Maintainer: Mem0 AI team (Deshraj Yadav)
Thanks to Deshraj Yadav and the Mem0 team for solving one of the hardest problems in AI applications — giving agents the ability to remember, learn, and personalize over time.