Scripts · April 2, 2026 · 1 min read

Mem0 — Memory Layer for AI Agents

Add persistent, personalized memory to any AI agent. Learns user preferences, adapts context, reduces tokens. 51K+ stars, used by 100K+ devs.

TokRepo Featured · Community
## Quick Start

Try it first, then decide whether to dig deeper.

The snippets below show both users and agents what to copy first, what to install, and where it goes.

```bash
pip install mem0ai
```

```python
from mem0 import Memory

m = Memory()

# Add memories from conversations
m.add("I prefer dark mode and use VS Code", user_id="alice")
m.add("My project uses Python 3.12 with FastAPI", user_id="alice")

# Search relevant memories
memories = m.search("What editor does Alice use?", user_id="alice")
print(memories)
# [{"memory": "Prefers VS Code with dark mode", ...}]

# Get all memories for a user
all_memories = m.get_all(user_id="alice")
```

Also available as an npm package: `npm install mem0ai`
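The quick start above uses Mem0's default embedded vector store. For external backends, Mem0 exposes a config-driven constructor; the sketch below follows the documented `Memory.from_config` pattern, but treat the specific values (Qdrant host/port, model name) as illustrative assumptions and check the keys against your installed version.

```python
from mem0 import Memory

# Illustrative config: provider names and nested keys follow Mem0's
# documented config schema, but verify against your installed version.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini"},
    },
}

m = Memory.from_config(config)
m.add("I prefer dark mode and use VS Code", user_id="alice")
```

With no config, `Memory()` falls back to the embedded local store, which is why the quick start works without any infrastructure.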
## Introduction

Mem0 (pronounced "mem-zero") is a **self-improving memory layer for AI applications**. It addresses the stateless nature of LLMs by providing persistent, intelligent memory that learns from user interactions, remembers preferences, and adapts over time.

Core capabilities:

- **Multi-Level Memory** — Store memories at user, session, and agent levels. User preferences persist across sessions while session context stays scoped
- **Intelligent Extraction** — Automatically identifies and stores relevant facts from conversations without manual tagging
- **Semantic Search** — Find relevant memories using natural language queries, not just keyword matching
- **Memory Consolidation** — Automatically merges related memories, resolves conflicts, and keeps the memory store clean
- **Graph Memory** — Optional knowledge graph storage for complex relationship tracking between entities
- **Self-Improving** — Memory quality improves over time as the system learns which information is actually retrieved and useful
- **Multiple Backends** — Works with Qdrant, ChromaDB, pgvector, and other vector stores. Supports OpenAI, Anthropic, and local embeddings

51,000+ GitHub stars. Used by 100,000+ developers. Reduces token usage by up to 90% for returning users by loading only relevant context.

## FAQ

**Q: How is Mem0 different from just using a vector database?**

A: Vector databases store and retrieve embeddings, but Mem0 handles the full memory lifecycle: extraction from conversations, deduplication, conflict resolution, temporal decay, and intelligent retrieval. It's a memory *system*, not just storage.

**Q: Does it work with any LLM?**

A: Yes. Mem0 supports OpenAI, Anthropic Claude, Google Gemini, Mistral, and any LiteLLM-compatible provider for both memory extraction and retrieval.

**Q: Where are memories stored?**

A: By default, memories are stored locally using an embedded vector store.
For production, you can configure external stores such as Qdrant, ChromaDB, or PostgreSQL with pgvector.

**Q: Can I use it with existing chat applications?**

A: Yes. Mem0 integrates with LangChain, CrewAI, AutoGen, and any Python or Node.js application. Add 3-5 lines of code to any chat loop to enable memory.

## Works With

- OpenAI / Anthropic / Google / local LLMs for memory extraction
- Qdrant / ChromaDB / pgvector for vector storage
- Neo4j for graph memory (optional)
- LangChain / CrewAI / AutoGen / any agent framework
- Python and Node.js SDKs
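The FAQ's "3-5 lines in any chat loop" claim can be illustrated with the pattern below. `FakeMemory` is a hypothetical stand-in for `mem0.Memory` (same `add`/`search` shape, naive keyword overlap instead of embeddings) so the sketch runs offline without API keys; with `mem0ai` installed you would swap in `Memory()` and the loop body stays the same.

```python
# Memory-augmented chat loop. FakeMemory is a stand-in for mem0.Memory,
# matching its add/search shape but using keyword overlap instead of
# embeddings, so this sketch runs offline. Swap in `from mem0 import Memory`.

class FakeMemory:
    def __init__(self):
        self.store = []  # list of {"memory": str, "user_id": str}

    def add(self, text, user_id):
        self.store.append({"memory": text, "user_id": user_id})

    def search(self, query, user_id, limit=3):
        words = set(query.lower().split())
        scored = [
            (len(words & set(m["memory"].lower().split())), m)
            for m in self.store
            if m["user_id"] == user_id
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [m for score, m in scored[:limit] if score > 0]

def chat_turn(memory, user_id, user_message, llm=lambda prompt: "(reply)"):
    # 1. Retrieve only the memories relevant to this message
    relevant = memory.search(user_message, user_id=user_id)
    context = "\n".join(m["memory"] for m in relevant)
    # 2. Prepend them to the prompt instead of replaying full history
    reply = llm(f"Known about user:\n{context}\n\nUser: {user_message}")
    # 3. Store the new message so future turns can draw on it
    memory.add(user_message, user_id=user_id)
    return reply, context

mem = FakeMemory()
mem.add("Prefers VS Code with dark mode", user_id="alice")
reply, ctx = chat_turn(mem, "alice", "Which dark theme works in VS Code?")
print(ctx)  # -> Prefers VS Code with dark mode
```

Only the three commented steps touch the memory layer, which is the token-saving idea: each turn sends the LLM a handful of relevant memories rather than the whole conversation history.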

## Sources & Thanks

- GitHub: [mem0ai/mem0](https://github.com/mem0ai/mem0)
- License: Apache 2.0
- Stars: 51,000+
- Maintainer: Mem0 AI team (Deshraj Yadav)

Thanks to Deshraj Yadav and the Mem0 team for solving one of the hardest problems in AI applications — giving agents the ability to remember, learn, and personalize over time.
