AutoMem
AutoMem is a graph-vector memory service that gives AI assistants durable, relational memory.
Install / Use
/learn @verygoodplugins/Automem
AI Memory That Actually Learns
AutoMem is a production-grade long-term memory system for AI assistants, with transparent baselines on the LoCoMo benchmark (ACL 2024): 89.27% on locomo-mini categories 1-4 (category 5 skipped), and 87.56% on full locomo with the opt-in category-5 judge enabled. See benchmarks/EXPERIMENT_LOG.md for methodology and current baselines.
Deploy in 60 seconds:

```bash
railway up
```
Graph Viewer (Standalone)
The visualizer now runs as a separate service/repository (automem-graph-viewer).
AutoMem keeps /viewer as a compatibility entrypoint and forwards users to the standalone app.
Set these variables on the AutoMem API service:
```bash
ENABLE_GRAPH_VIEWER=true
GRAPH_VIEWER_URL=https://<your-viewer-domain>
VIEWER_ALLOWED_ORIGINS=https://<your-viewer-domain>
```
When users open `/viewer/#token=...`, AutoMem preserves the hash token and redirects to the standalone viewer with `server=<automem-origin>`.
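As a rough sketch of the redirect target: the hash fragment never reaches the server, so only the `server=<automem-origin>` query parameter is appended server-side, while the `#token=...` fragment is carried over client-side. The function name and URL shape here are illustrative assumptions, not AutoMem's actual code:

```python
from urllib.parse import urlencode

def viewer_redirect(viewer_url: str, automem_origin: str) -> str:
    """Illustrative: build the standalone-viewer redirect target.

    The #token=... fragment is preserved client-side (fragments are not
    sent to the server); here we only attach the server query parameter
    the standalone viewer expects.
    """
    return f"{viewer_url.rstrip('/')}/?{urlencode({'server': automem_origin})}"

print(viewer_redirect("https://viewer.example.com", "https://automem.example.com"))
# → https://viewer.example.com/?server=https%3A%2F%2Fautomem.example.com
```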
Why AutoMem Exists
Your AI forgets everything between sessions. RAG dumps similar documents. Vector databases surface similar text but miss the relationships that give it meaning. None of them learn.
AutoMem gives AI assistants the ability to remember, connect, and evolve their understanding over time—just like human long-term memory.
How AutoMem Works
AutoMem combines two complementary storage systems:
- FalkorDB (Graph) - Stores memories as nodes with typed relationships between them
- Qdrant (Vectors) - Enables semantic similarity search via embeddings
This dual architecture lets you ask questions like "why did we choose PostgreSQL?" and get not just the memory, but the reasoning, preferences, and related decisions that informed it.
Core Capabilities
- 🧠 Store memories with metadata, importance scores, and temporal context
- 🔍 Recall via hybrid search combining semantic, keyword, graph, and temporal signals
- 🔗 Connect memories with 11 authorable relationship types, plus system-generated semantic and temporal edges
- 🎯 Learn through automatic entity extraction, pattern detection, and consolidation
- ⚡ Perform with sub-100ms recall across thousands of memories
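The store operation above implies a simple memory record. A minimal sketch in Python; the fields mirror the capabilities listed (content, metadata, importance, tags, temporal context), but the exact field names and schema of AutoMem's real API are assumptions here:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Memory:
    """Illustrative memory record; AutoMem's actual schema may differ."""
    content: str
    importance: float = 0.5                        # user-assigned priority, 0..1
    tags: list[str] = field(default_factory=list)  # e.g. ["decision"]
    metadata: dict = field(default_factory=dict)
    timestamp: str = ""                            # temporal context, ISO 8601

    def __post_init__(self) -> None:
        # Default the timestamp to "now" in UTC when none is supplied.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

m = Memory("Chose PostgreSQL for reliability", importance=0.9, tags=["decision"])
```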
Research Foundation
AutoMem implements techniques from peer-reviewed memory research:
- HippoRAG 2 (Ohio State, 2025): Graph-vector hybrid for associative memory
- A-MEM (2025): Dynamic memory organization with Zettelkasten-inspired clustering
- MELODI (DeepMind, 2024): Compression via gist representations
- ReadAgent (DeepMind, 2024): Context extension through episodic memory
Architecture
```mermaid
flowchart TB
    subgraph service [AutoMem Service Flask]
        API[REST API<br/>Memory Lifecycle]
        Enrichment[Background Enrichment<br/>Pipeline]
        Consolidation[Consolidation<br/>Engine]
        Backups[Automated Backups<br/>Optional]
    end
    subgraph storage [Dual Storage Layer]
        FalkorDB[(FalkorDB<br/>Graph Database)]
        Qdrant[(Qdrant<br/>Vector Database)]
    end
    Client[AI Client] -->|Store/Recall/Associate| API
    API --> FalkorDB
    API --> Qdrant
    Enrichment -->|11 edge types<br/>Pattern nodes| FalkorDB
    Enrichment -->|Semantic search<br/>3072-d vectors| Qdrant
    Consolidation --> FalkorDB
    Consolidation --> Qdrant
    Backups -.->|Optional| FalkorDB
    Backups -.->|Optional| Qdrant
```
- FalkorDB (graph) = canonical record, relationships, consolidation
- Qdrant (vectors) = semantic recall, similarity search
- Dual storage = built-in redundancy and disaster recovery
Why Graph + Vector?
```mermaid
flowchart LR
    subgraph trad [Traditional RAG Vector Only]
        direction TB
        Query1[Query: What database?]
        VectorDB1[(Vector DB)]
        Result1[✅ PostgreSQL memory<br/>❌ No reasoning<br/>❌ No connections]
        Query1 -->|Similarity search| VectorDB1
        VectorDB1 --> Result1
    end
    subgraph automem [AutoMem Graph + Vector]
        direction TB
        Query2[Query: What database?]
        subgraph hybrid [Hybrid Search]
            VectorDB2[(Qdrant<br/>Vectors)]
            GraphDB2[(FalkorDB<br/>Graph)]
        end
        Result2[✅ PostgreSQL memory<br/>✅ PREFERS_OVER MongoDB<br/>✅ RELATES_TO team expertise<br/>✅ DERIVED_FROM boring tech]
        Query2 --> VectorDB2
        Query2 --> GraphDB2
        VectorDB2 --> Result2
        GraphDB2 --> Result2
    end
```
Traditional RAG (Vector Only)
Memory: "Chose PostgreSQL for reliability"
Query: "What database should I use?"
Result: ✅ Finds the memory
❌ Doesn't know WHY you chose it
❌ Can't connect to related decisions
AutoMem (Graph + Vector)
Memory: "Chose PostgreSQL for reliability"
Graph: PREFERS_OVER MongoDB
RELATES_TO "team expertise" memory
DERIVED_FROM "boring technology" principle
Query: "What database should I use?"
Result: ✅ Finds the memory
✅ Knows your decision factors
✅ Shows related preferences
✅ Explains your reasoning pattern
How It Works in Practice
Multi-Hop Bridge Discovery
When you ask a question, AutoMem doesn't just find relevant memories—it finds the connections between them. This is called bridge discovery: following graph relationships to surface memories that link your initial results together.
```mermaid
graph TB
    Query[User Query:<br/>Why boring tech for Kafka?]
    Seed1[Seed Memory 1:<br/>PostgreSQL migration<br/>for operational simplicity]
    Seed2[Seed Memory 2:<br/>Kafka vs RabbitMQ<br/>evaluation]
    Bridge[Bridge Memory:<br/>Team prefers boring technology<br/>proven, debuggable systems]
    Result[Result:<br/>AI understands architectural<br/>philosophy, not just isolated choices]
    Query -->|Initial recall| Seed1
    Query -->|Initial recall| Seed2
    Seed1 -.->|DERIVED_FROM| Bridge
    Seed2 -.->|DERIVED_FROM| Bridge
    Bridge --> Result
    Seed1 --> Result
    Seed2 --> Result
```
Traditional RAG: Returns "Kafka" memories (misses the connection)
AutoMem bridge discovery:
- Seed 1: "Migrated to PostgreSQL for operational simplicity"
- Seed 2: "Evaluating Kafka vs RabbitMQ for message queue"
- Bridge: "Team prefers boring technology—proven, debuggable systems"
AutoMem finds the bridge that connects both decisions → Result: AI understands your architectural philosophy, not just isolated choices
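The walkthrough above reduces to a small idea: take the seed memories from initial recall, follow graph edges one hop, and keep any node reachable from more than one seed. A toy sketch of that idea; the graph contents, edge representation, and function are illustrative, not AutoMem's actual algorithm or schema:

```python
# Toy bridge discovery: adjacency sets stand in for DERIVED_FROM edges.
graph: dict[str, set[str]] = {
    "pg-migration": {"boring-tech"},   # "Migrated to PostgreSQL..." DERIVED_FROM
    "kafka-vs-rmq": {"boring-tech"},   # "Evaluating Kafka vs RabbitMQ" DERIVED_FROM
    "boring-tech": set(),              # "Team prefers boring technology"
}

def find_bridges(graph: dict[str, set[str]], seeds: list[str]) -> set[str]:
    """Return nodes linked to two or more of the seed memories."""
    counts: dict[str, int] = {}
    for seed in seeds:
        for neighbor in graph.get(seed, ()):
            counts[neighbor] = counts.get(neighbor, 0) + 1
    return {node for node, n in counts.items() if n >= 2}

print(find_bridges(graph, ["pg-migration", "kafka-vs-rmq"]))
# → {'boring-tech'}
```

Real scoring would also weight each bridge by relation strength, recency, and importance rather than a plain count.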
How to enable:
- Set `expand_relations=true` in recall requests (enabled by default)
- Control depth with the `relation_limit` and `expansion_limit` parameters
- Results are ranked by relation strength, temporal relevance, and importance
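A recall request with those settings might be assembled like this. Only the parameter names (`expand_relations`, `relation_limit`, `expansion_limit`) come from this README; the example values and their exact semantics in a given deployment are assumptions:

```python
from urllib.parse import urlencode

# Illustrative recall request with graph expansion enabled.
params = {
    "query": "why boring tech for Kafka?",
    "expand_relations": "true",   # follow graph edges from seed results (default)
    "relation_limit": 5,          # assumed: edges followed per seed memory
    "expansion_limit": 10,        # assumed: cap on expanded memories returned
}
url = f"/recall?{urlencode(params)}"
print(url)
```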
Knowledge Graphs That Evolve
```text
# After storing: "Migrated to PostgreSQL for operational simplicity"
AutoMem automatically:
├── Extracts entities (PostgreSQL, operational simplicity)
├── Auto-tags (entity:tool:postgresql, entity:concept:ops-simplicity)
├── Detects pattern ("prefers boring technology")
├── Links temporally (PRECEDED_BY migration planning)
└── Connects semantically (SIMILAR_TO "Redis deployment")

# Next query: "Should we use Kafka?"
AI recalls:
- PostgreSQL decision
- "Boring tech" pattern (reinforced across memories)
- Operational simplicity preference
→ Suggests: "Based on your pattern, consider RabbitMQ instead"
```
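The auto-tagging step above can be caricatured in a few lines. A toy sketch assuming a hand-written entity table and the `entity:<kind>:<name>` tag format shown in the tree; AutoMem's real entity extraction is far richer than a substring lookup:

```python
# Illustrative entity table; AutoMem derives entities automatically.
ENTITY_TAGS = {
    "postgresql": "entity:tool:postgresql",
    "operational simplicity": "entity:concept:ops-simplicity",
}

def auto_tags(content: str) -> list[str]:
    """Map known entity phrases found in the content to namespaced tags."""
    lower = content.lower()
    return [tag for phrase, tag in ENTITY_TAGS.items() if phrase in lower]

print(auto_tags("Migrated to PostgreSQL for operational simplicity"))
# → ['entity:tool:postgresql', 'entity:concept:ops-simplicity']
```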
9-Component Hybrid Scoring
```mermaid
flowchart LR
    Query[User Query:<br/>database migration<br/>tags=decision<br/>time=last month]
    subgraph scoring [Hybrid Scoring Components]
        direction TB
        V[Vector 25%<br/>Semantic similarity]
        K[Keyword 15%<br/>TF-IDF matching]
        R[Relation 25%<br/>Graph strength]
        C[Content 25%<br/>Token overlap]
        T[Temporal 15%<br/>Time alignment]
        Tag[Tag 10%<br/>Tag matching]
        I[Importance 5%<br/>User priority]
        Conf[Confidence 5%<br/>Memory confidence]
        Rec[Recency 10%<br/>Freshness boost]
    end
    FinalScore[Final Score:<br/>Ranked by meaning,<br/>not just similarity]
    Query --> V
    Query --> K
    Query --> R
    Query --> C
    Query --> T
    Query --> Tag
    Query --> I
    Query --> Conf
    Query --> Rec
    V --> FinalScore
    K --> FinalScore
    R --> FinalScore
    C --> FinalScore
    T --> FinalScore
    Tag --> FinalScore
    I --> FinalScore
    Conf --> FinalScore
    Rec --> FinalScore
```
```text
GET /recall?query=database%20migration&tags=decision&time_query=last%20month

# AutoMem combines nine signals:
score = vector×0.25      # Semantic similarity
      + keyword×0.15     # TF-IDF text matching
      + relation×0.25    # Graph relationship strength
      + content×0.25     # Direct token overlap
      + temporal×0.15    # Time alignment with query
      + tag×0.10         # Tag matching
      + importance×0.05  # User-assigned priority
      + confidence×0.05  # Memory confidence
      + recency×0.10     # Freshness boost

# Result: Memories ranked by meaning, not just similarity
```
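The weighted sum above translates directly into code. A sketch assuming each component score is normalized to [0, 1]; note the listed weights sum to 1.35 rather than 1.0, so a memory matching on every signal scores above 1.0:

```python
# Weights as listed in the formula above.
WEIGHTS = {
    "vector": 0.25, "keyword": 0.15, "relation": 0.25,
    "content": 0.25, "temporal": 0.15, "tag": 0.10,
    "importance": 0.05, "confidence": 0.05, "recency": 0.10,
}

def hybrid_score(signals: dict[str, float]) -> float:
    """Combine per-component scores (0..1 each) into one ranking score.

    Missing components contribute nothing, so sparse signals degrade
    gracefully instead of failing.
    """
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# A memory that matches on every signal:
print(round(hybrid_score({name: 1.0 for name in WEIGHTS}), 2))
# → 1.35
```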
Features
Core Memory Operations
- Store - Rich memories with metadata, importance, timestamps, embeddings
- Recall - Hybrid search (vector + keyword + tags + time windows)
- Update - Modify memories, auto-regenerate embeddings
- Delete - Remove from both graph and vector stores
- Associate - Link memories with typed relationships