Knowledge · May 12, 2026 · 2 min read

Awesome Agent Memory — Long-Term Context Index

Awesome Agent Memory curates systems, benchmarks, and papers on long-term context for LLMs/MLLMs—use it to compare approaches and pick tools to try.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, an adapter-specific plan, and the raw content to help agents judge fit, risk, and next actions.

Native · 96/100 · Policy: allow
Agent surface: Any MCP/CLI agent
Type: Prompt
Installation: Manual
Trust: Established
Entry point: README.md
Universal CLI command:
npx tokrepo install be6dfe8e-975e-5ade-9900-72221c32ab40
Introduction


  • Best for: engineers doing memory design/selection for coding agents and long-running assistants
  • Works with: GitHub reading + your preferred papers/tools stack; use it as an index, not a framework
  • Setup time: 5–15 minutes

Practical Notes

  • Organized into products, tutorials, surveys, benchmarks, and paper sections (see README table of contents).
  • Use one benchmark to define your acceptance bar (latency, recall, token budget), then pick an approach.
  • Keep a “memory regression set”: 20–50 queries that used to work, to catch drift when you change memory policy.
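A regression set like the one described above can be a plain list of query→expected-fact pairs replayed after every policy change. A minimal sketch, where `retrieve` is a hypothetical placeholder for your own memory backend:

```python
# Minimal memory-regression harness: replay known-good queries after a
# memory-policy change and report which ones no longer surface the fact.
REGRESSION_SET = [
    {"query": "preferred test runner", "must_contain": "pytest"},
    {"query": "database port", "must_contain": "5432"},
]

def retrieve(query: str) -> str:
    # Hypothetical placeholder backend: swap in your real memory lookup.
    canned = {
        "preferred test runner": "The project uses pytest for all tests.",
        "database port": "Postgres listens on port 5432 in dev.",
    }
    return canned.get(query, "")

def run_regressions(retrieve_fn) -> list[str]:
    """Return the queries that no longer surface the expected fact."""
    failures = []
    for case in REGRESSION_SET:
        if case["must_contain"] not in retrieve_fn(case["query"]):
            failures.append(case["query"])
    return failures

print(run_regressions(retrieve))  # [] means no drift
```

Run it in CI: a non-empty failure list is the earliest signal that a memory-policy change broke something that used to work.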

Main

A selection workflow that actually works:

  1. Define what “memory” means for your agent: project facts, user preferences, tool state, or long transcripts.
  2. Decide your constraint triangle: latency, privacy, token budget.
  3. Pick a baseline approach (summaries + retrieval, vector store, graph/wiki, or hybrid).
  4. Evaluate on one benchmark + your own domain tasks, then iterate.
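Step 4 above can be mechanized as a simple gate: define your acceptance bar once, then score each candidate approach against it. All numbers and approach names below are illustrative, not measurements from the repo:

```python
# Score candidate memory approaches against an acceptance bar
# (latency, recall, token budget). Values are illustrative only.
ACCEPTANCE = {"max_latency_ms": 200, "min_recall": 0.8, "max_tokens": 1500}

candidates = {
    "summaries+retrieval": {"latency_ms": 120, "recall": 0.83, "tokens": 900},
    "vector-store": {"latency_ms": 60, "recall": 0.75, "tokens": 1200},
    "graph-hybrid": {"latency_ms": 250, "recall": 0.88, "tokens": 1400},
}

def passes(m: dict) -> bool:
    return (m["latency_ms"] <= ACCEPTANCE["max_latency_ms"]
            and m["recall"] >= ACCEPTANCE["min_recall"]
            and m["tokens"] <= ACCEPTANCE["max_tokens"])

viable = [name for name, m in candidates.items() if passes(m)]
print(viable)  # ['summaries+retrieval']
```

Writing the bar down first keeps the comparison honest: an approach that wins on recall but blows the latency budget is still disqualified.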

The key is to avoid chasing “infinite context”. Good memory systems are selective: they store only high-signal facts and can justify why each memory was retrieved.
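A selective, auditable store can be sketched in a few lines. The signal threshold and the record fields here are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Memory:
    fact: str
    source: str                              # provenance of the fact
    ts: float = field(default_factory=time.time)

class SelectiveStore:
    """Store only high-signal facts; every retrieval explains itself."""

    def __init__(self, min_signal: float = 0.5):
        self.min_signal = min_signal
        self.items: list[Memory] = []

    def write(self, fact: str, source: str, signal: float) -> bool:
        if signal < self.min_signal:         # be selective: drop low-signal facts
            return False
        self.items.append(Memory(fact, source))
        return True

    def read(self, keyword: str) -> list[tuple[str, str]]:
        # Return (fact, justification) pairs so retrieval is auditable.
        return [(m.fact, f"matched '{keyword}', source={m.source}")
                for m in self.items if keyword in m.fact]

store = SelectiveStore()
store.write("User prefers dark mode", source="settings", signal=0.9)
store.write("User scrolled the page", source="telemetry", signal=0.1)  # dropped
print(store.read("dark"))
```

The justification string is the point: when a retrieved memory turns out to be wrong, you can trace it back to its source instead of guessing.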

FAQ

Q: Is vector search enough? A: Sometimes. For coding agents, you often need hybrid memory: durable facts + searchable artifacts + updated summaries.

Q: What’s the first metric to watch? A: Retrieval precision: how often retrieved items actually help the answer. Low precision is the fastest way to waste tokens.
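Retrieval precision is cheap to compute once you label which retrieved items actually helped. A minimal sketch with illustrative labels:

```python
# Retrieval precision: fraction of retrieved memory items that actually
# contributed to the answer. Item names are illustrative.
def retrieval_precision(retrieved: list[str], helpful: set[str]) -> float:
    if not retrieved:
        return 0.0
    return sum(item in helpful for item in retrieved) / len(retrieved)

retrieved = ["fact-a", "fact-b", "fact-c", "fact-d"]
helpful = {"fact-a", "fact-c"}
print(retrieval_precision(retrieved, helpful))  # 0.5
```

At 0.5, half the retrieved tokens are wasted; tracking this number over time tells you whether a memory-policy change is paying for itself.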

Q: How do I prevent stale memory? A: Attach timestamps and sources; re-validate critical facts periodically and prune memories that don’t get used.
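The timestamp-and-prune policy from this answer can be sketched directly; the age and usage thresholds below are illustrative assumptions:

```python
import time

# Prune memories that are both old and unused; recent or frequently
# retrieved facts survive. Thresholds are illustrative assumptions.
MAX_AGE_S = 30 * 24 * 3600   # 30 days
MIN_HITS = 1                 # must have been retrieved at least once

def prune(memories: list[dict], now: float) -> list[dict]:
    kept = []
    for m in memories:
        stale = (now - m["ts"]) > MAX_AGE_S and m["hits"] < MIN_HITS
        if not stale:
            kept.append(m)
    return kept

now = time.time()
mems = [
    {"fact": "API key rotated", "ts": now - 40 * 24 * 3600, "hits": 0},  # stale
    {"fact": "User timezone UTC+2", "ts": now - 40 * 24 * 3600, "hits": 5},
    {"fact": "Repo uses pnpm", "ts": now - 1 * 24 * 3600, "hits": 0},
]
print([m["fact"] for m in prune(mems, now)])  # stale entry dropped
```

Requiring both conditions (old *and* unused) avoids deleting durable facts that simply haven't changed; critical facts should additionally be re-validated against their source on a schedule.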

Source and acknowledgments

Source: https://github.com/TeleAI-UAGI/Awesome-Agent-Memory · License: Apache-2.0 · GitHub stars: 407 · forks: 28
