AI Agent Memory Patterns — Build Agents That Remember
Design patterns for adding persistent memory to AI agents. Covers conversation memory, entity extraction, knowledge graphs, tiered memory, and memory management strategies.
What it is
This resource collects design patterns for adding persistent memory to AI agents. It covers the spectrum from simple conversation buffers to sophisticated knowledge graph memory, with working code examples for each pattern. The patterns include conversation buffer memory, sliding window memory, entity extraction, summary memory, knowledge graph memory, and tiered memory architectures.
The target audience is developers building AI agents who need memory beyond a single conversation turn. Whether you are using LangChain, LlamaIndex, or building from scratch, these patterns apply to any agent framework.
How it saves time or tokens
Memory management is one of the hardest parts of agent development. These patterns save you from reinventing solutions for common memory problems. Sliding window memory reduces token usage by keeping only recent context. Summary memory compresses long conversations into shorter summaries. Tiered memory separates short-term and long-term storage to optimize both recall and cost.
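The tiered split described above can be sketched framework-agnostically. This is a hypothetical minimal structure (the class and method names are illustrative, not from any library): recent turns stay in short-term memory, extracted facts live in a medium-term map, and evicted turns fall through to a long-term store that would be a vector store or graph in practice.

```python
class TieredMemory:
    """Illustrative sketch of a tiered memory: three stores with different horizons."""

    def __init__(self, window=5):
        self.window = window
        self.short_term = []    # current conversation turns
        self.entity_facts = {}  # medium-term: entity -> fact
        self.long_term = []     # stand-in for a vector store or knowledge graph

    def add_turn(self, user_input, output):
        """Record an exchange; evict the oldest turn to long-term when over budget."""
        self.short_term.append((user_input, output))
        if len(self.short_term) > self.window:
            self.long_term.append(self.short_term.pop(0))

    def remember_fact(self, entity, fact):
        """Store a durable fact about an entity, surviving the current window."""
        self.entity_facts[entity] = fact
```

Only the short-term list is injected into every prompt; the other tiers are queried selectively, which is where the token savings come from.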
How to use
- Start with the simplest pattern (ConversationBufferMemory) and evaluate if it meets your needs.
- If conversations grow too long, switch to sliding window or summary memory to reduce token usage.
- For agents that need to remember facts across sessions, implement entity extraction or knowledge graph memory.
Example
# Pattern 1: Conversation Buffer (simplest)
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context(
    {'input': 'My name is Alice'},
    {'output': 'Hello Alice!'}
)
memory.save_context(
    {'input': 'I like Python'},
    {'output': 'Python is great for AI development.'}
)

# Pattern 2: Sliding Window (token-efficient)
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=5)  # Keep only the last 5 exchanges

# Pattern 3: Entity Memory (fact extraction)
from langchain.memory import ConversationEntityMemory

memory = ConversationEntityMemory(llm=llm)  # llm must be a configured LLM instance
# Automatically extracts entities: Alice -> likes Python
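Summary memory, listed among the patterns above but not shown, can be sketched without a framework. In this hypothetical sketch, `naive_summarize` is a stand-in for an LLM summarization call, and the class interface mirrors the buffer examples:

```python
class SummaryMemory:
    """Compress older turns into a running summary once max_turns is exceeded."""

    def __init__(self, summarize, max_turns=4):
        self.summarize = summarize  # callable(prev_summary, turns) -> new summary
        self.summary = ""
        self.turns = []
        self.max_turns = max_turns

    def save_context(self, user_input, output):
        self.turns.append((user_input, output))
        if len(self.turns) > self.max_turns:
            # Fold the accumulated turns into the summary and reset the buffer
            self.summary = self.summarize(self.summary, self.turns)
            self.turns = []

def naive_summarize(prev, turns):
    """Placeholder for an LLM call: keeps only the user inputs, joined."""
    joined = "; ".join(u for u, _ in turns)
    return f"{prev}; {joined}" if prev else joined
```

The prompt then carries `summary` plus the small `turns` buffer instead of the full history, trading a summarization call for a much shorter context.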
Related on TokRepo
- AI Memory Providers — Memory frameworks and providers for AI agents
- Agent Tools — AI agent frameworks and tools
Common pitfalls
- Conversation buffer memory grows linearly with conversation length. Without pruning, you will hit context window limits. Always set a maximum or use window/summary patterns.
- Entity extraction memory depends on LLM quality. Low-quality models may extract incorrect or irrelevant entities, degrading agent performance over time.
- Knowledge graph memory adds complexity and latency. Only use it when you need structured relationship queries across entities. For most chatbots, simpler patterns suffice.
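The first pitfall's advice (always set a maximum) can be sketched as a pruning helper. This is an illustrative function, not a library API; `count_tokens` here is a rough word-count stand-in that you would swap for a real tokenizer:

```python
def prune_to_budget(turns, max_tokens, count_tokens=lambda s: len(s.split())):
    """Drop the oldest exchanges until the history fits the token budget."""
    kept = list(turns)
    while kept and sum(count_tokens(u) + count_tokens(a) for u, a in kept) > max_tokens:
        kept.pop(0)  # evict the oldest turn first
    return kept
```

Running this before each prompt assembly guarantees the history never exceeds the budget, whatever the conversation length.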
Frequently Asked Questions
Which memory pattern should I start with?
Start with ConversationBufferMemory for prototyping. It stores the full conversation history. When you hit token limits, switch to ConversationBufferWindowMemory (keeps last N exchanges) or ConversationSummaryMemory (compresses history into summaries).
How does tiered memory work?
Tiered memory uses separate stores for different time horizons. Short-term memory holds the current conversation. Medium-term memory stores entity facts from recent sessions. Long-term memory uses a vector store or knowledge graph for persistent recall across all sessions.
Can I combine multiple memory types?
Yes. LangChain's CombinedMemory lets you use multiple memory types simultaneously. For example, combine buffer memory for recent context with entity memory for persistent facts.
How do I persist memory across restarts?
Store memory state in a database. LangChain supports Redis, PostgreSQL, and file-based persistence. For knowledge graph memory, use Neo4j or a similar graph database as the backing store.
Does agent memory increase token costs?
Yes. Every piece of memory injected into the prompt consumes tokens. Buffer memory is the most expensive as it includes the full conversation. Summary and window memory reduce costs by compressing or truncating history.
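The simplest form of the file-based persistence mentioned above can be sketched with the standard library. These helper names are our own, not a LangChain API; the sketch assumes turns are plain string pairs:

```python
import json
from pathlib import Path

def save_memory(turns, path):
    """Serialize conversation turns to a JSON file."""
    Path(path).write_text(json.dumps(turns))

def load_memory(path):
    """Restore turns on restart; returns an empty history if nothing was saved yet."""
    p = Path(path)
    if not p.exists():
        return []
    # JSON round-trips tuples as lists, so convert back
    return [tuple(t) for t in json.loads(p.read_text())]
```

For anything beyond a single-process prototype, swap the JSON file for Redis or PostgreSQL as the answer above suggests.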
Citations (3)
- LangChain Documentation — LangChain memory modules for conversation, entity, and summary memory
- Anthropic Research — AI agent memory architecture patterns
- LlamaIndex Memory Docs — knowledge graph memory for AI agents
Source & Thanks
References:
- Mem0 — 25k+ stars
- Letta — 12k+ stars
- Zep — 3k+ stars
- LangChain Memory