
Cognitive Weaver — Experimental Agent Memory Architecture

Cognitive Weaver is a research-oriented memory library that explores reflective memory consolidation: the agent periodically reviews and rewrites its own memories instead of merely storing them.

Why Choose It

Most memory libraries are write-mostly: extract facts, embed, retrieve. Cognitive Weaver adds a reflection loop — periodically, an LLM reviews recent memories, consolidates duplicates, promotes important facts, and demotes stale ones. The idea is borrowed from sleep-consolidation theories in cognitive science.

It’s early-stage research code, not a production-hardened library. You should consider it when: (a) you have an agent that runs for months and accumulates contradictory facts, (b) you need to experiment with memory-quality research, or (c) you want to see how reflection-based memory differs from extraction-based memory in practice.

If you’re shipping a product, use mem0 or Zep instead. Revisit Cognitive Weaver when your retrieval quality plateaus and you suspect memory consolidation is the missing piece.

Quick Start — Reflection Loop Pattern

The exact Cognitive Weaver API is experimental and changes between versions. Treat the code below as the pattern: keep raw memories in a vector store, run a separate reflection pass on a cadence, and let consolidation produce higher-signal memories over time.

# Cognitive Weaver's core idea, implemented against any vector store.
# Treat this as a pattern you can replicate — the repo is experimental.

from datetime import datetime, timedelta, timezone
# from cognitive_weaver import MemoryStore, Reflector  # experimental API

store = MemoryStore()          # vector-backed store of raw memories
reflector = Reflector(         # LLM-driven consolidation pass
    llm="gpt-4o-mini",
    window_hours=24,
)

# Normal path — write memories as they arrive
store.write(user_id="u1", text="User asked about Go vs Rust.")
store.write(user_id="u1", text="User prefers Rust for systems work.")
store.write(user_id="u1", text="User mentioned Rust again, with Axum framework.")

# Reflection pass — runs periodically (cron, background worker, etc.)
def nightly_consolidation(user_id: str):
    recent = store.read_since(user_id, since=datetime.now(timezone.utc) - timedelta(days=1))
    consolidated = reflector.consolidate(recent)  # LLM merges and scores
    for memory in consolidated:
        store.upsert(user_id=user_id, text=memory.text, importance=memory.score)
    # Low-importance memories decay (soft-delete on next read)

nightly_consolidation("u1")

# Later: a single higher-quality memory appears in retrieval
hits = store.search("what language does the user prefer?", user_id="u1")
# → "User consistently prefers Rust, especially for web services with Axum."

Core Capabilities

Reflection-based consolidation

LLM-driven review of recent memories. Merges duplicates, promotes important facts, demotes noise. Runs as a background job — not every request.

Importance scoring

Each memory carries an importance score adjusted during reflection. Retrieval biases toward high-importance memories, accelerating convergence on "what really matters".
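Importance-biased retrieval can be sketched as a simple re-ranking step. The `rank` helper and the `alpha` blend weight below are hypothetical illustrations, not Cognitive Weaver's actual API:

```python
# Hypothetical sketch: bias retrieval toward high-importance memories by
# blending vector similarity with the stored importance score.

def rank(hits, alpha=0.7):
    """hits: (text, similarity, importance) triples, both scores in [0, 1]."""
    scored = [(text, alpha * sim + (1 - alpha) * imp) for text, sim, imp in hits]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

hits = [
    ("User asked about Go once.", 0.80, 0.2),        # raw, low-importance
    ("User consistently prefers Rust.", 0.75, 0.9),  # consolidated, high-importance
]
ranked = rank(hits)
# The consolidated memory outranks the raw one despite slightly lower similarity.
```

Tuning `alpha` trades recency-style similarity against the reflector's judgment of what matters.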

Memory decay

Low-importance memories can be soft-deleted (or archived to cold storage) after a configurable age — keeping the hot index lean.
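One way to sketch such a decay rule; the `should_archive` helper and the thresholds are illustrative assumptions, not the library's API:

```python
from datetime import datetime, timedelta, timezone

# Illustrative decay rule: a memory is archived once it is both
# low-importance and older than a configurable maximum age.
def should_archive(memory, now, max_age=timedelta(days=30), min_importance=0.3):
    too_unimportant = memory["importance"] < min_importance
    too_old = (now - memory["written_at"]) > max_age
    return too_unimportant and too_old

now = datetime.now(timezone.utc)
old_noise = {"importance": 0.1, "written_at": now - timedelta(days=45)}
fresh_fact = {"importance": 0.9, "written_at": now - timedelta(days=45)}
# Age alone is not enough: the high-importance memory survives.
assert should_archive(old_noise, now) and not should_archive(fresh_fact, now)
```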

Pluggable store

Any vector DB you already use. Cognitive Weaver is the reflection layer on top, not a new storage engine.

Observability hooks

Before/after snapshots of every reflection pass are emitted as structured events — makes it possible to audit what the agent "rewrote" during consolidation.
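A plausible shape for such a structured event, with hypothetical field names; the point is that every consolidation pass leaves a diffable before/after record:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit event for one reflection pass: snapshot the memory
# texts going in and coming out so rewrites can be reviewed later.
def reflection_event(user_id, before, after):
    return {
        "type": "reflection.consolidation",
        "user_id": user_id,
        "at": datetime.now(timezone.utc).isoformat(),
        "before": before,  # raw memory texts entering the pass
        "after": after,    # consolidated texts produced by the pass
        "compression": len(after) / max(len(before), 1),
    }

event = reflection_event("u1", ["fact a", "fact a (dup)", "fact b"], ["merged fact"])
print(json.dumps(event, indent=2))
```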

Research-friendly

The codebase is small and readable — ~1.5K LOC in Python. Forkable as a starting point for custom memory research.

Comparison

|  | Maturity | Unique Idea | Production Use? | Best For |
| --- | --- | --- | --- | --- |
| Cognitive Weaver (this) | Experimental | Reflection-based consolidation | Not recommended yet | Research & experimentation |
| mem0 | Stable | Automatic extraction + dedup | Yes | Production chatbots |
| Letta | Stable | Agent-directed paging | Yes | Long-running agents |
| Graphiti | Stable | Bi-temporal graph edges | Yes | History-sensitive domains |

Real-World Use Cases

01. Research on memory architectures

If you are writing a paper or running experiments on LLM memory quality, Cognitive Weaver’s small surface area makes it easy to modify and measure.

02. Prototyping reflection in production systems

Borrow the pattern (nightly consolidation pass against your existing memory store) even if you don’t adopt the library. Most production stacks benefit from periodic memory review.

03. Agents with heavy drift

When an agent runs for months and starts surfacing stale or contradictory memories, reflection-based consolidation is one of the few known countermeasures.

Pricing & Licensing

Open source: check the repo for the specific license; most experimental memory projects publish under MIT or Apache 2.0.

Cost: you pay for the reflection LLM calls. A daily consolidation pass over 1K memories on gpt-4o-mini costs ~$0.10. Costs scale linearly with memory volume and reflection frequency.
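A back-of-envelope estimator for that cost, assuming ~150 prompt tokens per memory and approximate gpt-4o-mini pricing ($0.15/M input, $0.60/M output; verify current rates). All names and numbers here are assumptions, and the result lands in the same order of magnitude as the figure above:

```python
# Rough daily reflection cost under the stated pricing assumptions.
def daily_reflection_cost(n_memories, tokens_per_memory=150,
                          in_price=0.15e-6, out_price=0.60e-6,
                          output_ratio=0.3):
    input_tokens = n_memories * tokens_per_memory
    output_tokens = input_tokens * output_ratio  # consolidated output is shorter
    return input_tokens * in_price + output_tokens * out_price

cost = daily_reflection_cost(1_000)
print(f"${cost:.2f}/day for 1K memories")
```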

Alternative without adopting the library: implement the pattern yourself in ~200 lines. The research value is in the idea, not the specific codebase.

FAQ

Is Cognitive Weaver production-ready?

No — treat it as a research project. For production, use mem0, Zep, or Letta. Cognitive Weaver is valuable when you want to experiment with reflection-based memory or fork the code for research.

How is reflection different from deduplication?

Deduplication removes near-identical memories. Reflection goes further: it can merge related-but-not-identical memories into a single higher-signal summary, re-rank importance, and prune outdated facts. It’s a semantic operation, not just text matching.

Can I add reflection to mem0 or Zep?

Yes. Run a scheduled job that queries recent memories, asks an LLM to consolidate them, and writes back the result. mem0’s graph memory feature already does something similar for entity relationships; adding fact-level reflection is a clean extension.
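A minimal sketch of that scheduled job, with the store and the LLM stubbed out. All names here are hypothetical; swap in your store's read/write calls and a real chat-completion for `llm_merge`:

```python
from datetime import datetime, timedelta, timezone

# Store-agnostic nightly pass: read the last day's memories, have an LLM
# merge them into one consolidated memory, and write it back.
def nightly_pass(read_since, write, llm_merge, user_id):
    since = datetime.now(timezone.utc) - timedelta(days=1)
    recent = read_since(user_id, since)
    if len(recent) < 2:
        return recent                  # nothing worth consolidating
    merged = llm_merge(recent)         # one chat-completion call in practice
    write(user_id, merged)
    return [merged]

# Wiring with in-memory stand-ins for the store and the LLM:
db = {"u1": ["likes Rust", "mentioned Axum"]}
result = nightly_pass(
    read_since=lambda uid, since: db[uid],
    write=lambda uid, text: db[uid].append(text),
    llm_merge=lambda texts: "User prefers Rust; builds web services with Axum.",
    user_id="u1",
)
```

Schedule it with cron or a background worker, exactly like the library's own reflection pass.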

What problems does reflection NOT solve?

Hallucination (if the source is wrong, consolidation amplifies it), retrieval accuracy bottlenecks (hybrid search fixes those), and latency (reflection is offline anyway). Use it when memory quality degrades over time — not as a general-purpose fix.

Where can I learn more about reflection in AI agents?

Start with the Reflexion paper (Shinn et al., 2023) for agent self-improvement, and the Voyager paper (Wang et al., 2023) for lifelong learning with a skill library. Cognitive Weaver sits in the same research lineage.

Related Libraries