Prompts · Apr 8, 2026 · 3 min read

AI Agent Memory Patterns — Build Agents That Remember

Design patterns for adding persistent memory to AI agents. Covers conversation memory, entity extraction, knowledge graphs, tiered memory, and memory management strategies.

TL;DR
A collection of design patterns for adding persistent memory to AI agents, covering conversation buffers, entity extraction, knowledge graphs, and tiered memory.
§01

What it is

This resource collects design patterns for adding persistent memory to AI agents. It covers the spectrum from simple conversation buffers to sophisticated knowledge graph memory, with working code examples for each pattern. The patterns include conversation buffer memory, sliding window memory, entity extraction, summary memory, knowledge graph memory, and tiered memory architectures.

The target audience is developers building AI agents who need memory beyond a single conversation turn. Whether you are using LangChain, LlamaIndex, or building from scratch, these patterns apply to any agent framework.

§02

How it saves time or tokens

Memory management is one of the hardest parts of agent development. These patterns save you from reinventing solutions for common memory problems. Sliding window memory reduces token usage by keeping only recent context. Summary memory compresses long conversations into shorter summaries. Tiered memory separates short-term and long-term storage to optimize both recall and cost.
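
As a concrete illustration of the summary approach, here is a minimal sketch using LangChain's ConversationSummaryMemory (the model name is a placeholder; a configured OpenAI API key is assumed):

from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model='gpt-4o-mini')  # placeholder model; any chat model works
memory = ConversationSummaryMemory(llm=llm)
memory.save_context({'input': 'My name is Alice'}, {'output': 'Hello Alice!'})
memory.save_context({'input': 'I like Python'}, {'output': 'Great choice.'})

# Instead of the full transcript, the prompt receives a rolling summary,
# so injected token usage stays roughly flat as the conversation grows.
print(memory.load_memory_variables({})['history'])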

§03

How to use

  1. Start with the simplest pattern (ConversationBufferMemory) and evaluate if it meets your needs.
  2. If conversations grow too long, switch to sliding window or summary memory to reduce token usage (see the sketch after this list).
  3. For agents that need to remember facts across sessions, implement entity extraction or knowledge graph memory.
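
For step 2, a minimal sketch of the middle ground: ConversationSummaryBufferMemory keeps recent exchanges verbatim and folds older ones into a summary once a token budget is exceeded (the 200-token limit and model name are arbitrary example values):

from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model='gpt-4o-mini')  # placeholder model
# Recent turns stay verbatim; anything beyond ~200 tokens is summarized.
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=200)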
§04

Example

# Pattern 1: Conversation Buffer (simplest)
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context(
    {'input': 'My name is Alice'},
    {'output': 'Hello Alice!'}
)
memory.save_context(
    {'input': 'I like Python'},
    {'output': 'Python is great for AI development.'}
)

# Pattern 2: Sliding Window (token-efficient)
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=5)  # Keep last 5 exchanges

# Pattern 3: Entity Memory (fact extraction)
from langchain.memory import ConversationEntityMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model='gpt-4o-mini')  # placeholder; any chat model works
memory = ConversationEntityMemory(llm=llm)
# Automatically extracts entities: Alice -> likes Python
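
# Pattern 4: Knowledge Graph (structured relations) -- a sketch; the triple
# shown is illustrative of typical extraction, not guaranteed LLM output.
# ConversationKGMemory requires networkx for its backing graph.
from langchain.memory import ConversationKGMemory

memory = ConversationKGMemory(llm=llm)
memory.save_context(
    {'input': 'Alice works at Acme'},
    {'output': 'Got it.'}
)
# memory.kg.get_triples() should now include something like
# ('Alice', 'Acme', 'works at')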
§05

Common pitfalls

  • Conversation buffer memory grows linearly with conversation length. Without pruning, you will hit context window limits. Always set a maximum or use window/summary patterns (a token-capped sketch follows this list).
  • Entity extraction memory depends on LLM quality. Low-quality models may extract incorrect or irrelevant entities, degrading agent performance over time.
  • Knowledge graph memory adds complexity and latency. Only use it when you need structured relationship queries across entities. For most chatbots, simpler patterns suffice.
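
For the first pitfall, one hedged mitigation is a token-capped buffer: ConversationTokenBufferMemory prunes the oldest messages once the buffer exceeds a limit (the 1000-token cap and model name below are example values):

from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model='gpt-4o-mini')  # used for token counting; placeholder model
# Oldest messages are dropped once the buffer exceeds max_token_limit.
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=1000)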

Frequently Asked Questions

Which memory pattern should I start with?

Start with ConversationBufferMemory for prototyping. It stores the full conversation history. When you hit token limits, switch to ConversationBufferWindowMemory (keeps last N exchanges) or ConversationSummaryMemory (compresses history into summaries).

How does tiered memory work?

Tiered memory uses separate stores for different time horizons. Short-term memory holds the current conversation. Medium-term memory stores entity facts from recent sessions. Long-term memory uses a vector store or knowledge graph for persistent recall across all sessions.
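
A minimal, framework-agnostic sketch of the three tiers (the class and the vector-store interface here are hypothetical, for illustration only):

from collections import deque

class TieredMemory:
    """Hypothetical three-tier store: turns, entity facts, persistent recall."""

    def __init__(self, vector_store, max_turns=20):
        self.short_term = deque(maxlen=max_turns)  # current conversation
        self.medium_term = {}                      # entity -> facts from recent sessions
        self.long_term = vector_store              # assumed to expose add() and search()

    def remember_turn(self, user_msg, agent_msg):
        self.short_term.append((user_msg, agent_msg))

    def remember_fact(self, entity, fact):
        self.medium_term.setdefault(entity, []).append(fact)
        self.long_term.add(f'{entity}: {fact}')    # persist across sessions

    def recall(self, query, k=3):
        return self.long_term.search(query, k=k)   # semantic lookup over all sessions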

Can I combine multiple memory patterns?

Yes. LangChain's CombinedMemory lets you use multiple memory types simultaneously. For example, combine buffer memory for recent context with entity memory for persistent facts.
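
A sketch of that combination (the memory_key wiring is an assumption: keys must not collide, and ConversationEntityMemory already exposes a 'history' variable of its own):

from langchain.memory import (
    CombinedMemory,
    ConversationBufferMemory,
    ConversationEntityMemory,
)
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model='gpt-4o-mini')  # placeholder model
buffer = ConversationBufferMemory(memory_key='recent_lines', input_key='input')
entities = ConversationEntityMemory(llm=llm, input_key='input')
memory = CombinedMemory(memories=[buffer, entities])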

How do I persist memory across server restarts?

Store memory state in a database. LangChain supports Redis, PostgreSQL, and file-based persistence. For knowledge graph memory, use Neo4j or a similar graph database as the backing store.
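
For example, a Redis-backed buffer might look like this (the session id and URL are placeholders; recent versions ship the history classes in langchain_community):

from langchain.memory import ConversationBufferMemory
from langchain_community.chat_message_histories import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id='user-42',            # placeholder; one key per user/session
    url='redis://localhost:6379/0',
)
memory = ConversationBufferMemory(chat_memory=history)  # survives restarts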

Does memory affect token costs?

Yes. Every piece of memory injected into the prompt consumes tokens. Buffer memory is the most expensive as it includes the full conversation. Summary and window memory reduce costs by compressing or truncating history.
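
One way to audit the overhead is to count what the memory actually injects (tiktoken and the cl100k_base encoding are assumptions about your tokenizer):

import tiktoken
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({'input': 'My name is Alice'}, {'output': 'Hello Alice!'})

enc = tiktoken.get_encoding('cl100k_base')
injected = memory.load_memory_variables({})['history']
print(len(enc.encode(injected)), 'tokens added to every prompt')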
