Graphiti — Temporal Knowledge Graphs for AI Agents


Graphiti builds temporal knowledge graphs from streaming data, with every edge carrying a validity window. Agents can query not only what is true now, but what was true at any point in the past.

Why choose it

Most memory systems overwrite facts when they change. Graphiti keeps the history. When a user says "I moved to Tokyo", Graphiti doesn’t delete the old "lives in Berlin" fact — it sets its valid_to timestamp and opens a new edge. Now your agent can answer "where did William live before Tokyo?" naturally.

The technical bet is that bi-temporal edges (valid_from / valid_to plus created_at / invalidated_at) are the right primitive for agent memory. It is the classic bitemporal pattern from temporal database design (event time plus system time), adapted to the entity-relationship model that LLMs naturally produce when extracting facts from prose.

Graphiti is built by the Zep team but released as an independent library. Use it standalone when you want graph-first memory without Zep’s session framing, or embed it in larger RAG pipelines that need the temporal edge model.

Quick Start — Neo4j + Graphiti

Graphiti extracts entities and typed edges from each episode, then reconciles them against the existing graph. When the second episode says William moved to Singapore, Graphiti invalidates the old Shenzhen edge (valid_to = 2026-04-14) and creates a new Singapore edge — the history is preserved.

# pip install graphiti-core
# docker run -p 7687:7687 -e NEO4J_AUTH=neo4j/password neo4j:5
# By default Graphiti uses OpenAI for extraction, so OPENAI_API_KEY must be set.
import asyncio
from datetime import datetime, timezone
from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

g = Graphiti(
    "bolt://localhost:7687", "neo4j", "password",
)

async def main():
    await g.build_indices_and_constraints()

    # Add episodes (raw conversations or docs) — Graphiti extracts entities + edges
    await g.add_episode(
        name="intro",
        episode_body="William is the founder of KeepRule and lives in Shenzhen.",
        source=EpisodeType.text,
        reference_time=datetime(2026, 1, 10, tzinfo=timezone.utc),
        source_description="onboarding chat",
    )
    await g.add_episode(
        name="move",
        episode_body="William just moved from Shenzhen to Singapore.",
        source=EpisodeType.text,
        reference_time=datetime(2026, 4, 14, tzinfo=timezone.utc),
        source_description="weekly standup",
    )

    # Temporal query — agents can ask "what was true at a point in time"
    hits = await g.search(
        "Where does William live?",
        center_node_uuid=None,
    )
    for edge in hits:
        print(edge.fact, "valid:", edge.valid_at, "invalid:", edge.invalid_at)

    await g.close()

asyncio.run(main())

Core capabilities

Bi-temporal edges

Every edge tracks both event time (valid_from / valid_to — when the fact was true in the world) and system time (created_at / invalidated_at — when you learned it). Classic bitemporal pattern, adapted for LLM-extracted data.
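A point-in-time question then reduces to a window check on event time. A plain-Python sketch (field names mirror the pattern, not Graphiti's internal API):

```python
from datetime import datetime, timezone

def true_at(edge: dict, t: datetime) -> bool:
    """Was this fact true in the world at time t (event time)?"""
    started = edge["valid_from"] <= t
    not_ended = edge["valid_to"] is None or t < edge["valid_to"]
    return started and not_ended

edges = [
    {"fact": "William lives in Shenzhen",
     "valid_from": datetime(2026, 1, 10, tzinfo=timezone.utc),
     "valid_to": datetime(2026, 4, 14, tzinfo=timezone.utc)},
    {"fact": "William lives in Singapore",
     "valid_from": datetime(2026, 4, 14, tzinfo=timezone.utc),
     "valid_to": None},
]

march = datetime(2026, 3, 1, tzinfo=timezone.utc)
current = [e["fact"] for e in edges if true_at(e, march)]
# → ["William lives in Shenzhen"]
```

The same check against system time (created_at / invalidated_at) answers the other question: what did we *believe* at time t, regardless of when it was true.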

Streaming ingestion

Designed to ingest episodes incrementally — conversations, documents, API events. No batch re-indexing when new data arrives. LLM-based extraction + graph merge happens per-episode.
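Incremental ingestion is just a loop over add_episode. A sketch assuming the Quick Start `g` client and a hypothetical chat-event stream (the event shape and the "chat stream" label are invented for illustration):

```python
from datetime import datetime, timezone

# Hypothetical event stream; the shape is an assumption, not a Graphiti type.
events = [
    {"name": "standup-1", "text": "Ana joined the platform team.", "ts": "2026-02-01"},
    {"name": "standup-2", "text": "Ana now leads the platform team.", "ts": "2026-03-01"},
]

def to_episode_kwargs(ev: dict) -> dict:
    """Normalize one stream event into add_episode keyword arguments."""
    return {
        "name": ev["name"],
        "episode_body": ev["text"],
        "reference_time": datetime.fromisoformat(ev["ts"]).replace(tzinfo=timezone.utc),
        "source_description": "chat stream",
    }

episodes = [to_episode_kwargs(ev) for ev in events]
# Each episode would then be sent as soon as it arrives:
#     await g.add_episode(source=EpisodeType.text, **kwargs)
# Extraction and graph merge run per episode; nothing is batch re-indexed.
```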

Hybrid retrieval

Graph traversal (relationship walks, BFS/DFS) combined with vector similarity on node descriptions. Queries can be structural ("who worked with X?"), semantic, or both.
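One way to picture the blend: score each candidate edge by semantic similarity and graph proximity together. This is an illustrative weighting, not Graphiti's actual ranking function:

```python
def hybrid_score(sem_sim: float, hops: int, alpha: float = 0.7) -> float:
    """Blend vector similarity with graph distance from a focus node."""
    proximity = 1.0 / (1 + hops)  # fewer hops from the focus node => higher score
    return alpha * sem_sim + (1 - alpha) * proximity

# (fact, cosine similarity to the query, hops from the focus node)
candidates = [
    ("William founded KeepRule", 0.62, 1),
    ("KeepRule raised a seed round", 0.55, 2),
    ("Singapore is in Asia", 0.58, 4),
]
ranked = sorted(candidates, key=lambda c: hybrid_score(c[1], c[2]), reverse=True)
# Graph proximity pulls structurally close facts above loosely related ones.
```

Structural questions ("who worked with X?") lean on the hop term; semantic questions lean on the similarity term; the blend serves both.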

Schema flexibility

Entity and relationship types are extracted by the LLM, not declared up front. Works across domains — same library for customer support, medical records, or codebase memory.

Neo4j or FalkorDB backend

Production-grade graph databases. Graphiti does the smart extraction and merging; the DB handles storage, transactions, and query execution.

Zep integration

Zep uses Graphiti under the hood for its graph service. You can use either independently or together — Zep for managed session memory, Graphiti for custom graph-first pipelines.

Comparison

|                    | Memory model             | History                 | Query type             | Backend                |
| ------------------ | ------------------------ | ----------------------- | ---------------------- | ---------------------- |
| Graphiti (this)    | Temporal knowledge graph | Full bitemporal history | Graph + vector hybrid  | Neo4j / FalkorDB       |
| mem0               | Vector + optional graph  | Overwrites on update    | Vector similarity      | Qdrant / Chroma / etc. |
| Neo4j GraphRAG     | Static knowledge graph   | Snapshot at ingest      | Graph + vector         | Neo4j                  |
| Microsoft GraphRAG | Community-detected graph | Rebuilt on re-index     | Hierarchical summaries | Filesystem / any DB    |

Real-world use cases

01. Healthcare and compliance

Scenarios where historical facts matter legally — "what medication was the patient on in January?" Bitemporal edges are the standard pattern for regulated data, now available to LLM agents.

02. CRM and account history

Sales assistants that need to reason about relationship evolution: "who decided to expand the contract, and what was their role at the time?"

03. Engineering context

Code intelligence agents tracking which team owned which service over time, including reorgs and ownership handoffs. Simple "current owner" tables lose the pre-reorg history.

Pricing and licensing

Graphiti: Apache 2.0 open source. No service fee — self-host alongside Neo4j Community (free) or FalkorDB (Redis-based, free). You pay for LLM extraction API calls and the graph DB infra.

Graph DB cost: Neo4j AuraDB free tier covers small graphs (~200K nodes). For production, Neo4j AuraDB Professional starts around $65/month, or self-host Neo4j Community on your own infra.

LLM extraction cost: Each episode triggers an entity/edge extraction call. ~$0.001 per episode on gpt-4o-mini. The extraction model is configurable.
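At that rate, extraction spend stays modest even at volume. A back-of-envelope helper, assuming the ~$0.001/episode figure above and a 30-day month:

```python
def monthly_extraction_cost(episodes_per_day: int,
                            cost_per_episode: float = 0.001) -> float:
    """Estimated monthly LLM extraction spend in USD (30-day month)."""
    return episodes_per_day * 30 * cost_per_episode

monthly_extraction_cost(100)   # a busy single-user agent: ~$3/month
monthly_extraction_cost(5000)  # a mid-sized deployment: ~$150/month
```

Swapping in a larger extraction model shifts cost_per_episode; the graph DB infra cost is separate.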

Related TokRepo assets

FAQ

Graphiti vs Microsoft GraphRAG — how are they different?

Microsoft GraphRAG builds a static graph from a document corpus at index time and queries hierarchical summaries. Graphiti builds an evolving temporal graph from streaming episodes and queries individual edges with time filters. Use GraphRAG for large static knowledge bases; use Graphiti for agent memory that changes over time.

Do I have to know graph databases to use Graphiti?

No — Graphiti abstracts the Cypher/graph queries behind a Python API. You write .add_episode() and .search() calls. Knowing Neo4j helps when debugging or writing custom queries, but is not required for typical usage.

Can Graphiti replace my vector database?

For memory, yes — Graphiti embeds node descriptions and does hybrid graph + vector search internally. For general-purpose RAG over documents, keep a separate vector DB. Graphiti is optimized for entity-centric memory, not long-form document retrieval.

Is Graphiti production-ready?

Graphiti is used in production by the Zep team (it powers Zep's graph service) and by a growing list of third parties listed on the GitHub README. The API is stable; expect occasional migration scripts as the library matures.

What LLMs does Graphiti work with?

OpenAI, Anthropic Claude, Google Gemini, Azure OpenAI, and local models via an OpenAI-compatible endpoint (Ollama, vLLM). Configure the LLM client in the Graphiti constructor — extraction and search use the same client by default.

Similar tools