Mem0 — Memory Layer for AI Applications
Add persistent, personalized memory to AI agents and assistants. Mem0 stores user preferences, past interactions, and learned context across sessions.
What it is
Mem0 (pronounced 'memo') is a memory infrastructure layer for AI applications. It automatically extracts, stores, and retrieves relevant facts and preferences from conversations, giving AI agents persistent context across sessions without manual prompt engineering.
The project suits developers building chatbots, AI assistants, or agent systems that need to remember user preferences, project context, or learned behaviors between sessions.
How it saves time or tokens
Without persistent memory, developers stuff prior conversation history into every prompt, burning tokens on repetitive context. Mem0 replaces this pattern with a search-based retrieval system: only relevant memories are fetched per query, reducing prompt size and improving response accuracy.
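As an illustrative sketch of the two patterns (not the mem0 library itself), the toy keyword matcher below stands in for Mem0's semantic search; real retrieval is vector-based and fuzzier, but the token economics are the same:

```python
# Toy illustration of history-stuffing vs. memory retrieval.
# The keyword matcher here stands in for mem0's semantic search.

history = [
    "User: I prefer Python over JavaScript.",
    "User: My project uses PostgreSQL and Redis.",
    "User: I like dark mode.",
    "User: Please respond in bullet points.",
]

def stuffed_prompt(query: str) -> str:
    # Every prior message goes into the prompt, relevant or not.
    return "\n".join(history) + "\n" + query

def retrieved_prompt(query: str) -> str:
    # Fetch only memories that share a word with the query.
    words = set(query.lower().split())
    hits = [m for m in history if words & set(m.lower().split())]
    return "\n".join(hits) + "\n" + query

q = "What database does the project use?"
print(len(stuffed_prompt(q).split()))    # word count with full history
print(len(retrieved_prompt(q).split()))  # smaller count with retrieval
```

The stuffed prompt grows linearly with conversation length; the retrieved prompt stays bounded by how many memories actually match the query.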
How to use
- Install Mem0 with `pip install mem0ai`.
- Initialize the Memory object and add facts using `m.add('fact', user_id='user')`.
- Search memories with `m.search('query', user_id='user')` to retrieve relevant context for your prompts.
Example
from mem0 import Memory
m = Memory()
# Store user preferences
m.add('I prefer Python over JavaScript', user_id='alice')
m.add('My project uses PostgreSQL and Redis', user_id='alice')
# Retrieve relevant memories
results = m.search('what database does alice use?', user_id='alice')
print(results)
# [{'memory': 'Uses PostgreSQL and Redis', 'score': 0.95}]
# Multi-level memory: user, agent, and session scopes
m.add('Dark mode preferred', user_id='alice')
m.add('Always respond in bullet points', agent_id='support-bot')
m.add('Debugging the login flow today', run_id='session-42')  # run_id scopes a session
Related on TokRepo
- AI Memory Providers -- compare memory systems like Mem0, Zep, MemGPT, and Letta
- Mem0 Deep-Dive -- detailed breakdown of Mem0 features and integration patterns
Common pitfalls
- Do not store entire conversation transcripts as memories. Mem0 works best with extracted facts and preferences, not raw text dumps.
- Memory search is semantic, not keyword-based. Queries like 'database' will match 'PostgreSQL' even without exact word overlap.
- For production use, configure a persistent vector store backend. The default in-memory store loses all memories on process restart.
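For example, a persistent Qdrant backend can be configured when constructing the Memory object. The config keys below follow the mem0 docs at the time of writing and may differ across versions; the host and port are assumptions for a local Qdrant container:

```python
from mem0 import Memory

# Point mem0 at a running Qdrant instance instead of the default
# in-memory store, so memories survive restarts.
# Host/port are assumptions for a local Qdrant container.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        },
    }
}

m = Memory.from_config(config)
m.add("My project uses PostgreSQL and Redis", user_id="alice")
```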
Frequently Asked Questions
How is Mem0 different from chat history?
Chat history stores raw messages chronologically. Mem0 extracts discrete facts and preferences, stores them as semantic vectors, and retrieves only relevant ones per query. This reduces token usage and improves response quality compared to stuffing full history into prompts.
Which vector store backends does Mem0 support?
Mem0 supports multiple vector store backends, including Qdrant, Chroma, Pinecone, and an in-memory default for development. You configure the backend when initializing the Memory object.
Can different users and agents have separate memories?
Yes. Mem0 supports user-level, agent-level, and session-level memory scopes. Each memory is tagged with identifiers, so different users and agents maintain separate memory spaces.
Which LLMs does Mem0 work with?
Mem0 is LLM-agnostic. It stores and retrieves memories independently of which model you use. You can pair it with OpenAI, Anthropic, local models, or any other provider.
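Because Mem0 delegates fact extraction to an LLM of your choice, the provider is also set via config. The snippet below assumes the config schema documented for recent mem0 versions; the keys and the model name are examples, not requirements:

```python
from mem0 import Memory

# Choose the LLM mem0 uses for memory extraction; mem0 itself is
# provider-agnostic. The model name below is just an example.
config = {
    "llm": {
        "provider": "anthropic",
        "config": {
            "model": "claude-3-5-sonnet-20241022",
        },
    }
}

m = Memory.from_config(config)
```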
Is Mem0 open source?
Yes. Mem0 is open source and available on GitHub. There is also a managed cloud offering for teams that want hosted memory infrastructure without managing vector stores.
Citations (3)
- Mem0 GitHub -- Mem0 provides persistent personalized memory for AI applications
- Mem0 Docs -- Supports user-level, agent-level, and session-level memory
- Mem0 Search Docs -- Vector-based semantic search for memory retrieval
Source & Thanks
- GitHub: mem0ai/mem0 (25k+ stars)
- Docs: docs.mem0.ai