Knowledge · Apr 7, 2026 · 2 min read

Zep — Long-Term Memory for AI Agents and Assistants

Production memory layer for AI assistants. Zep stores conversation history, extracts facts, builds knowledge graphs, and provides temporal-aware retrieval for LLMs.

What is Zep?

Zep is a production memory layer for AI assistants and agents. It goes beyond simple conversation history — Zep automatically extracts facts, builds user knowledge graphs, detects temporal context (when things happened), and provides intelligent retrieval so your AI remembers and reasons about past interactions.

Answer-Ready: Zep is a production memory layer for AI agents that automatically extracts facts from conversations, builds knowledge graphs, and provides temporal-aware retrieval. Used in production by enterprise AI teams. Works with any LLM framework.

Best for: AI teams building assistants that need to remember users across sessions. Works with: LangChain, LlamaIndex, OpenAI, Anthropic, any framework. Setup time: Under 5 minutes.

Core Features

1. Automatic Fact Extraction

# User says: "I'm a software engineer at Google working on search"
# Zep extracts:
# - Fact: "User is a software engineer"
# - Fact: "User works at Google"
# - Fact: "User works on search"
# All timestamped and linked to user entity
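
The idea can be illustrated with a toy extractor that turns a pattern-matched utterance into timestamped triples. This is a simplified sketch, not Zep's actual pipeline (which uses an LLM for extraction); the `Fact` class and regex rule here are purely illustrative:

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    observed_at: datetime

def extract_facts(utterance: str) -> list[Fact]:
    """Toy rule-based extractor; real systems use an LLM for this step."""
    now = datetime.now(timezone.utc)
    facts = []
    # Matches utterances of the form "I'm a <role> at <company> working on <project>"
    m = re.search(r"I'm a (.+?) at (\w+) working on (\w+)", utterance)
    if m:
        role, company, project = m.groups()
        facts.append(Fact("User", "role", role, now))
        facts.append(Fact("User", "works_at", company, now))
        facts.append(Fact("User", "works_on", project, now))
    return facts

for f in extract_facts("I'm a software engineer at Google working on search"):
    print(f"{f.subject} --{f.predicate}--> {f.obj}")
```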

2. Knowledge Graphs

Zep builds entity-relationship graphs from conversations:

[User] --works_at--> [Google]
[User] --role--> [Software Engineer]
[User] --works_on--> [Search]
[Google] --is_a--> [Tech Company]

3. Temporal Awareness

# Zep understands time context
# March: "I use React for frontend"
# June: "I switched to Vue"
# Query in July: "What framework does user use?"
# Answer: "Vue" (Zep knows the React fact is outdated)

4. Dialog Classification

Automatically classifies conversation segments:

# assumes an initialized client, e.g. client = AsyncZep(api_key=...)
classifiers = await client.memory.get_session_classifiers(session_id="123")
# {"sentiment": "positive", "topic": "technical_support", "intent": "troubleshooting"}

5. Hybrid Search

# Search across all sessions for a user
results = await client.memory.search(
    session_id="session-123",
    text="deployment issues",
    search_type="mmr",  # Maximal Marginal Relevance
    limit=5,
)
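
MMR balances relevance against redundancy: it iteratively picks the candidate most similar to the query while penalizing similarity to results already selected. A compact sketch over unit-normalized embeddings (illustrative only, not Zep's implementation):

```python
import numpy as np

def mmr(query: np.ndarray, candidates: np.ndarray, k: int = 2,
        lam: float = 0.7) -> list[int]:
    """Maximal Marginal Relevance over unit-normalized vectors.

    lam=1.0 → pure relevance; lam=0.0 → pure diversity.
    """
    selected: list[int] = []
    remaining = list(range(len(candidates)))
    rel = candidates @ query  # cosine similarity to the query
    while remaining and len(selected) < k:
        def score(i):
            # Penalty: similarity to the closest already-selected result
            div = max((candidates[i] @ candidates[j] for j in selected), default=0.0)
            return lam * rel[i] - (1 - lam) * div
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Two near-duplicate relevant docs and one distinct doc
docs = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]])
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
query = np.array([1.0, 0.0])
print(mmr(query, docs, k=2, lam=0.3))  # → [0, 2] (skips the near-duplicate doc 1)
```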

6. Framework Integrations

# LangChain
from langchain_community.memory import ZepMemory
memory = ZepMemory(session_id="123", url="http://localhost:8000")

# LlamaIndex
from llama_index.storage.chat_store.zep import ZepChatStore

# Direct API
from zep_python.client import AsyncZep

Architecture

Conversations → Zep Server
                    ↓
            ┌───────┴────────┐
            │ Fact Extractor │ → Facts DB
            │ Graph Builder  │ → Knowledge Graph
            │ Classifier     │ → Session Metadata
            │ Embedder       │ → Vector Index
            └───────┬────────┘
                    ↓
            Intelligent Retrieval
                    ↓
            Your AI Agent

Deployment

# Docker (self-hosted)
docker compose up -d

# Or use Zep Cloud (managed)
# Sign up at zep.ai

FAQ

Q: How does Zep compare to Mem0? A: Zep focuses on production features — knowledge graphs, temporal awareness, dialog classification. Mem0 is simpler with basic memory storage. Zep is more enterprise-oriented.

Q: Can I self-host? A: Yes; an open-source Community Edition is available. The Enterprise Edition adds advanced features.

Q: Does it handle PII? A: Zep Cloud offers PII detection and redaction. Self-hosted users can add their own pipeline.


Source and acknowledgments

Created by Zep AI. Licensed under Apache 2.0.

getzep/zep — 3k+ stars
