Omem
Shared Memory That Never Forgets — persistent memory for AI agents with Space-based sharing across agents and teams. Plugins for OpenCode, Claude Code, and OpenClaw, plus an MCP Server.
Install / Use: `/learn @ourmem/Omem`
The Problem
Your AI agents have amnesia — and they work alone.
- 🧠 Amnesia — every session starts from zero. Preferences, decisions, context — all gone.
- 🏝️ Silos — your Coder agent can't access what your Writer agent learned.
- 📁 Local lock-in — memory tied to one machine. Switch devices, lose everything.
- 🚫 No sharing — team agents can't share what they know. Every agent re-discovers the same things.
- 🔍 Dumb recall — keyword match only. No semantic understanding, no relevance ranking.
- 🧩 No collective intelligence — even when agents work on the same team, there's no shared knowledge layer.
ourmem fixes all of this.
What is ourmem
ourmem gives AI agents shared persistent memory — across sessions, devices, agents, and teams. One API key reconnects everything.
🌐 Website: ourmem.ai
<table> <tr> <td width="50%" valign="top">🧑‍💻 I use AI coding tools
Install the plugin for your platform. Memory works automatically — your agent recalls past context on session start and captures key info on session end.
→ Jump to Quick Start
</td> <td width="50%" valign="top">🔧 I'm building AI products
REST API with 48+ endpoints. Docker one-liner for self-deploy. Embed persistent memory into your own agents and workflows.
→ Jump to Self-Deploy
</td> </tr> </table>

Core Capabilities
<table> <tr> <td width="25%" align="center"> <h4>🔗 Shared Across Boundaries</h4> Three-tier Spaces — Personal, Team, Organization — let knowledge flow across agents and teams with full provenance tracking. </td> <td width="25%" align="center"> <h4>🧠 Never Forget</h4> Weibull decay model manages the memory lifecycle — core memories persist, peripheral ones gracefully fade. No manual cleanup. </td> <td width="25%" align="center"> <h4>🔍 Deep Understanding</h4> 11-stage hybrid retrieval: vector search, BM25, RRF fusion, cross-encoder reranking, and MMR diversity for precise recall. </td> <td width="25%" align="center"> <h4>⚡ Smart Evolution</h4> 7-decision reconciliation — CREATE, MERGE, SUPERSEDE, SUPPORT, CONTEXTUALIZE, CONTRADICT, or SKIP — makes memories smarter over time. </td> </tr> </table>

📖 Memory Pipeline Architecture — Technical deep-dive into how ourmem stores, retrieves, and evolves memories.
🔗 Memory Sharing Architecture — How memories flow across agents and teams: sharing, provenance, versioning, and cross-space search.
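The Weibull decay model behind "Never Forget" can be sketched in a few lines. The tier-specific β values (Core=0.8, Working=1.0, Peripheral=1.3) come from the feature table below; the scale parameter η (characteristic lifetime) is an illustrative assumption, not a documented ourmem default:

```python
import math

# Weibull retention curve: R(t) = exp(-(t / eta) ** beta).
# Beta values follow the tier table (Core=0.8, Working=1.0, Peripheral=1.3);
# eta_days (characteristic lifetime) is a made-up illustration value.
TIER_BETA = {"core": 0.8, "working": 1.0, "peripheral": 1.3}

def retention(age_days: float, tier: str, eta_days: float = 30.0) -> float:
    """Fraction of a memory's strength surviving after age_days."""
    beta = TIER_BETA[tier]
    return math.exp(-((age_days / eta_days) ** beta))
```

Past the characteristic lifetime, the lower β of the Core tier produces a heavier tail, so core memories outlive working and peripheral ones without any manual cleanup.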
Feature Overview
| Category | Feature | Details |
|----------|---------|---------|
| Platforms | 4 platforms | OpenCode, Claude Code, OpenClaw, MCP Server |
| Sharing | Space-based sharing | Personal / Team / Organization with provenance |
| | Provenance tracking | Every shared memory carries full lineage |
| | Quality-gated auto-sharing | Rules fire on memory creation (async, non-blocking) |
| | Vector-enabled shared copies | Shared copies carry source vector embeddings for full search |
| | Idempotent sharing | Re-sharing returns existing copy (no duplicates) |
| | Version tracking | Memories track version counter, shared copies detect staleness via ?check_stale=true |
| | Re-share stale copies | Refresh outdated shared copies with latest source content and vector |
| | Convenience sharing | One-step cross-user share (share-to-user) and bulk share (share-all-to-user) with auto-bridging |
| | Organization management | One-step org creation (org/setup) and publish (org/publish) with auto-share rules |
| | Cross-space search | Search across all accessible spaces at once |
| Ingestion | Smart dedup | 7 decisions: CREATE, MERGE, SKIP, SUPERSEDE, SUPPORT, CONTEXTUALIZE, CONTRADICT |
| | Noise filter | Regex + vector prototypes + feedback learning |
| | Admission control | 5-dimension scoring gate (utility, confidence, novelty, recency, type prior) |
| | Dual-stream write | Sync fast path (<50ms) + async LLM extraction |
| | Post-import intelligence | Batch import → async LLM re-extraction + relation discovery |
| | Adaptive import strategy | Auto/atomic/section/document — heuristic content type detection |
| | Content fidelity | Original text preserved, dual-path search (vector + BM25 on source text) |
| | Cross-reconcile | Discover relations between memories via vector similarity |
| | Batch self-dedup | LLM deduplicates facts within same import batch |
| | Privacy protection | `<private>` tag redaction before storage |
| Retrieval | 11-stage pipeline | Vector + BM25 → RRF → reranker → decay → importance → MMR diversity |
| | User Profile | Static facts + dynamic context, <100ms |
| | Retrieval trace | Per-stage explainability (input/output/score/duration) |
| Lifecycle | Weibull decay | Tier-specific β (Core=0.8, Working=1.0, Peripheral=1.3) |
| | Three-tier promotion | Peripheral ↔ Working ↔ Core with access-based promotion |
| | Auto-forgetting | TTL detection for time-sensitive info ("tomorrow", "next week") |
| Multi-modal | File processing | PDF, image OCR, video transcription, code AST chunking |
| | GitHub connector | Real-time webhook sync for code, issues, PRs |
| Deploy | Open source | Apache-2.0 (plugins + docs) |
| | Self-hostable | Single binary, Docker one-liner, ~$5/month |
| | musl static build | Zero-dependency binary for any Linux x86_64 |
| | Object storage | Alibaba Cloud OSS or S3-compatible, with ECS RAM role support |
| | Hosted option | ourmem.ai — nothing to deploy |
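The RRF fusion step in the retrieval pipeline merges the vector and BM25 rankings before reranking. A minimal sketch of Reciprocal Rank Fusion (the constant k=60 is the commonly used default, assumed here, not a confirmed ourmem setting):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over rankings of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)
```

Because RRF only consumes rank positions, it needs no score normalization between the vector and BM25 lists — a common reason to fuse before handing results to a cross-encoder reranker.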
From Isolated Agents to Collective Intelligence
Most AI memory systems trap knowledge in silos. ourmem's three-tier Space architecture enables knowledge flow across agents and teams — with provenance tracking and quality-gated sharing.
Research shows collaborative memory reduces redundant work by up to 61% — agents stop re-discovering what their teammates already know. — Collaborative Memory, ICLR 2026
| | Personal | Team | Organization |
|---|----------|------|--------------|
| Scope | One user, multiple agents | Multiple users | Company-wide |
| Example | Coder + Writer share preferences | Backend team shares arch decisions | Tech standards, security policies |
| Access | Owner's agents only | Team members | All org members (read-only) |
Provenance-tracked sharing — every shared memory carries its lineage: who shared it, when, and where it came from. Shared copies include the source memory's vector embedding, so they're fully searchable in the target space.
Quality-gated auto-sharing — rules filter by importance, category, and tags. Rules fire automatically when new memories are created. Only high-value insights cross space boundaries.
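A quality-gated auto-share rule can be modeled as a predicate over importance, category, and tags. This is a sketch only — the field names, thresholds, and rule shape below are illustrative assumptions, not ourmem's actual rule schema:

```python
from dataclasses import dataclass, field

@dataclass
class ShareRule:
    # Hypothetical rule fields, for illustration only.
    min_importance: float = 0.7
    categories: set[str] = field(default_factory=set)    # empty = any category
    required_tags: set[str] = field(default_factory=set)

    def matches(self, memory: dict) -> bool:
        """True if this memory clears the quality gate for auto-sharing."""
        if memory.get("importance", 0.0) < self.min_importance:
            return False
        if self.categories and memory.get("category") not in self.categories:
            return False
        return self.required_tags <= set(memory.get("tags", []))

rule = ShareRule(min_importance=0.8, categories={"decision"},
                 required_tags={"architecture"})
```

A rule like this would run asynchronously on memory creation, so the write path stays non-blocking while only high-value insights cross space boundaries.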
How It Works
┌──────────────────────────────────────────────────────────────────┐
│ Your AI Agent (OpenCode / Claude Code / OpenClaw / Cursor) │
│ │
│ Session Start → auto-recall relevant memories │
│ During Work → keyword detection triggers recall │
│ Session End → auto-capture decisions, preferences, facts │
└───────────────────────────┬──────────────────────────────────────┘
│ REST API (X-API-Key)
▼
┌──────────────────────────────────────────────────────────────────┐
│ ourmem Server │
│ │
│ ┌─ Smart Ingest ─────────────────────────────────────────────┐ │
│ │ Messages → LLM extraction → noise filter → admission │ │
│ │ → 7-decision reconciliation (CREATE / MERGE / SUPERSEDE / │ │
│ │ SUPPORT / CONTEXTUALIZE / CONTRADICT / SKIP) │ │
│ │ → cross-reconcile relations → privacy redaction │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─ Hybrid Search (11 stages) ────────────────────────────────┐ │
│ │ Vector + BM25 → RRF fusion → cross-encoder reranker │ │
│ │ → Weibull decay boost → importance scoring │ │
│ │ → MMR diversity → parallel cross-space aggregation │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─ Sharing Engine ───────────────────────────────────────────┐ │
│ │ Personal / Team / Organization spaces │ │
│ │ → provenance tracking → version-based stale detection │ │
│ │ → auto-share rules → one-step share-to-user │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─ Lifecycle ────────────────────────────────────────────────┐ │
│ │ Weibull decay (Core β=0.8 / Working β=1.0 / Peripheral │ │
│ │ β=1.3) → 3-tier promotion → auto-forgetting TTL │ │
│ └────────────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────┘
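The MMR diversity stage at the tail of the hybrid search pipeline can be sketched as a greedy trade-off between relevance and redundancy. The λ weight and the toy similarity function are assumptions for illustration:

```python
def mmr(candidates: list[str],
        relevance: dict[str, float],
        similarity,                  # similarity(a, b) -> float in [0, 1]
        lam: float = 0.7,
        top_k: int = 3) -> list[str]:
    """Maximal Marginal Relevance: pick relevant but non-redundant results."""
    selected: list[str] = []
    pool = list(candidates)
    while pool and len(selected) < top_k:
        def score(doc: str) -> float:
            # Penalize similarity to anything already selected.
            redundancy = max((similarity(doc, s) for s in selected), default=0.0)
            return lam * relevance[doc] - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected
```

With λ=1 this degenerates to plain relevance ranking; lowering λ trades raw relevance for variety, which is why MMR sits after decay and importance scoring rather than before them.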
- Write once, recall everywhere — memories persist across sessions, devices, and agents
- Gets smarter over time — reconciliation merges, updates, and contradicts memories automatically
- Share across boundaries — Personal → Team → Organization, with full provenance tracking
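Everything above rides on the REST API authenticated via the X-API-Key header. A hedged sketch of constructing such a request with only the standard library — the endpoint path, host, and payload fields are guesses for illustration, not documented routes (consult the API reference for the real ones):

```python
import json
import urllib.request

def build_recall_request(api_key: str, query: str,
                         base_url: str = "https://api.ourmem.ai") -> urllib.request.Request:
    # The path and payload shape below are illustrative assumptions,
    # not ourmem's documented API surface.
    payload = json.dumps({"query": query}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/memories/search",
        data=payload,
        method="POST",
    )
    req.add_header("X-API-Key", api_key)
    req.add_header("Content-Type", "application/json")
    return req
```

Sending the request is then one `urllib.request.urlopen(req)` call; plugins do the equivalent automatically at session start and session end.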
