
Memorycrystal

Persistent memory for AI agents — OpenClaw plugin + MCP server

Install / Use

/learn @memorycrystal/Memorycrystal
Supported Platforms

Claude Code
Claude Desktop
Cursor

README

<!-- This repository is the open-source mirror of Memory Crystal. The hosted service and web app are maintained separately. -->

<p align="center">
  <a href="https://memorycrystal.ai">
    <img src="https://raw.githubusercontent.com/memorycrystal/memorycrystal/main/assets/icon.svg" alt="Memory Crystal" width="80" height="80">
  </a>
</p>

<h1 align="center">Memory Crystal</h1>

<p align="center">
  <strong>Your AI finally remembers.</strong>
</p>

<p align="center">
  <a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-blue.svg?style=for-the-badge" alt="MIT License"></a>
  <a href="https://www.npmjs.com/package/crystal-memory"><img src="https://img.shields.io/npm/v/crystal-memory?style=for-the-badge&color=cb3837" alt="npm version"></a>
  <a href="https://memorycrystal.ai"><img src="https://img.shields.io/badge/Cloud-Online-brightgreen?style=for-the-badge" alt="Cloud Status"></a>
</p>

<p align="center">
  <a href="https://memorycrystal.ai">Website</a> ·
  <a href="https://memorycrystal.ai/docs">Docs</a> ·
  <a href="https://memorycrystal.ai/dashboard">Dashboard</a> ·
  <a href="https://github.com/memorycrystal/memorycrystal">GitHub</a>
</p>

Memory Crystal is a persistent cognitive memory layer for AI assistants. It captures every conversation, extracts what matters, stores it in a vector-indexed knowledge graph, and injects the right memories before each response. Your AI stops forgetting between sessions.

Ships as an OpenClaw plugin, an MCP server for any compatible host, a Next.js dashboard, and a Convex-backed multi-tenant cloud.


🧠 The Context Engine

This isn't a vector database with a chat wrapper. The Context Engine is an active memory system that runs before every AI response.

 User message arrives
        │
        ▼
┌──────────────────────────────────────────────────┐
│                  CONTEXT ENGINE                  │
│                                                  │
│  1. Time-ordered recent window                   │
│     (last ~30 msgs, 7k char budget)              │
│  2. Semantic search across STM + LTM             │
│  3. Knowledge graph boost — connected            │
│     memories ranked higher                       │
│  4. Adaptive recall mode (general/focused/deep)  │
│  5. Inject top memories + recent context         │
│     into model context                           │
│                                                  │
└──────────────────────────────────────────────────┘
        │
        ▼
  AI responds with full context
        │
        ▼
┌──────────────────────────────────────────────────┐
│                MEMORY EXTRACTION                 │
│                                                  │
│  1. Capture raw message → STM                    │
│  2. LLM extracts durable memories → LTM          │
│  3. Async graph enrichment connects memories     │
│                                                  │
└──────────────────────────────────────────────────┘

Every response is informed by what came before. Every conversation feeds the next one.
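The loop above can be sketched in a few lines of TypeScript. This is an illustrative model of the pipeline, not Memory Crystal's actual code: the names (`Memory`, `buildContext`, `rankMemories`) and the 0.1 graph-boost weight are assumptions; only the window size and character budget come from the diagram.

```typescript
// Illustrative sketch of the Context Engine loop (not the real API).

interface Memory {
  id: string;
  text: string;
  similarity: number; // cosine similarity to the current query
  graphLinks: number; // count of knowledge-graph connections
}

const RECENT_WINDOW = 30; // last ~30 messages
const CHAR_BUDGET = 7_000; // 7k character budget

// 1. Time-ordered recent window, trimmed newest-first to the char budget.
function recentWindow(messages: string[]): string[] {
  const recent = messages.slice(-RECENT_WINDOW);
  let used = 0;
  const out: string[] = [];
  for (let i = recent.length - 1; i >= 0; i--) {
    if (used + recent[i].length > CHAR_BUDGET) break;
    used += recent[i].length;
    out.unshift(recent[i]);
  }
  return out;
}

// 2-3. Semantic hits re-ranked with a (hypothetical) graph-connection boost.
function rankMemories(hits: Memory[]): Memory[] {
  return [...hits].sort(
    (a, b) =>
      b.similarity + 0.1 * b.graphLinks - (a.similarity + 0.1 * a.graphLinks),
  );
}

// 5. Assemble the block injected into model context.
function buildContext(messages: string[], hits: Memory[]): string {
  const memories = rankMemories(hits).slice(0, 5).map((m) => `- ${m.text}`);
  return [
    "[Relevant memories]",
    ...memories,
    "[Recent conversation]",
    ...recentWindow(messages),
  ].join("\n");
}
```

Note how a moderately similar memory with several graph links can outrank a slightly more similar but isolated one, which is the behavior the diagram's step 3 describes.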

🔮 Two Memory Layers

| Layer | What it stores | Retention |
|---|---|---|
| Short-term (STM) | Raw messages, verbatim | Rolling window (7–90 days by tier) |
| Long-term (LTM) | Extracted facts, decisions, lessons, people, rules | Forever, vector-indexed |

STM gives your AI perfect short-term recall. LTM gives it permanent knowledge. Both are searched together, every turn.
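A minimal sketch of the two layers, to make the retention difference concrete. The tier names and the intermediate 30-day value are assumptions (the table only states a 7–90 day range), and the substring match stands in for the real vector search:

```typescript
// Illustrative two-layer model; names and tier values are assumptions.

type Tier = "free" | "pro" | "team";

// Rolling STM window per tier (days); 7 and 90 come from the table above,
// 30 is a made-up middle tier.
const STM_RETENTION_DAYS: Record<Tier, number> = { free: 7, pro: 30, team: 90 };

interface StmMessage { text: string; timestamp: number } // verbatim message
interface LtmMemory { text: string; kind: "fact" | "decision" | "lesson" }

// STM keeps only messages inside the rolling window; LTM keeps everything.
function pruneStm(messages: StmMessage[], tier: Tier, now: number): StmMessage[] {
  const cutoff = now - STM_RETENTION_DAYS[tier] * 24 * 60 * 60 * 1000;
  return messages.filter((m) => m.timestamp >= cutoff);
}

// Both layers are searched together every turn (naive substring stand-in
// for the real vector search).
function searchBoth(query: string, stm: StmMessage[], ltm: LtmMemory[]): string[] {
  const q = query.toLowerCase();
  return [
    ...ltm.filter((m) => m.text.toLowerCase().includes(q)).map((m) => m.text),
    ...stm.filter((m) => m.text.toLowerCase().includes(q)).map((m) => m.text),
  ];
}
```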

🕸️ Knowledge Graph

Memories don't exist in isolation. An async background job connects related memories into a graph — decisions link to the lessons that informed them, people link to the projects they worked on, rules link to the events that created them.

When the Context Engine searches, memories with strong graph connections to the current topic get ranked higher. Your AI doesn't just remember facts — it understands relationships.
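One way graph-boosted ranking can work is sketched below: a memory directly linked to a strong semantic hit gets its score raised before the final sort. The 0.6 "strong hit" threshold and 0.15 boost are illustrative values, not Memory Crystal's actual weights:

```typescript
// Hypothetical graph-boost pass over semantic search results.

interface Scored { id: string; score: number }

// edges: adjacency list of the knowledge graph (memory id -> linked ids)
function applyGraphBoost(
  hits: Scored[],
  edges: Map<string, string[]>,
  boost = 0.15,
): Scored[] {
  // Memories that matched the query strongly on their own.
  const strong = new Set(hits.filter((h) => h.score >= 0.6).map((h) => h.id));
  return hits
    .map((h) => {
      const linkedToStrong = (edges.get(h.id) ?? []).some((id) => strong.has(id));
      return { ...h, score: linkedToStrong ? h.score + boost : h.score };
    })
    .sort((a, b) => b.score - a.score);
}
```

Under this scheme a lesson linked to a highly relevant decision climbs past an unconnected fact with a slightly better raw similarity, which is the "relationships, not just facts" effect described above.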

🗄️ Five Memory Stores

| Store | Purpose | Example |
|---|---|---|
| sensory | Raw observations and signals | "Andy sounds frustrated about the deploy" |
| episodic | Events and experiences | "We shipped v2 on March 15" |
| semantic | Facts and knowledge | "The API uses Convex for the backend" |
| procedural | How-to and workflows | "Deploy with npm run convex:deploy" |
| prospective | Plans and future intentions | "Need to add billing webhooks next sprint" |

Each store has different retention rules and search weights. The Context Engine knows which stores matter for which questions.
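Per-store weighting might look like the sketch below for a "how do I…" question, where procedural memories should dominate. The numbers are assumptions; Memory Crystal's real weights and retention rules are not published in this README:

```typescript
// Hypothetical per-store search weights for a how-to question.

type Store = "sensory" | "episodic" | "semantic" | "procedural" | "prospective";

// Illustrative weights: procedural memories matter most for "how do I…".
const HOW_TO_WEIGHTS: Record<Store, number> = {
  sensory: 0.2,
  episodic: 0.5,
  semantic: 0.8,
  procedural: 1.0,
  prospective: 0.3,
};

// Scale a raw similarity score by the weight of the store it came from.
function weightedScore(store: Store, similarity: number): number {
  return similarity * HOW_TO_WEIGHTS[store];
}
```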

🏷️ Nine Memory Categories

decision · lesson · person · rule · event · fact · goal · workflow · conversation

Memories are tagged on extraction so recall is precise. Ask "why did we choose Convex?" and you get decisions. Ask "how do I deploy?" and you get procedures.
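In the spirit of those examples, question-to-category routing can be sketched as a few prefix rules. The rules here are illustrative assumptions, not the engine's actual heuristics; only the category names come from the list above:

```typescript
// Hypothetical routing from a question to the categories worth searching.

type Category =
  | "decision" | "lesson" | "person" | "rule" | "event"
  | "fact" | "goal" | "workflow" | "conversation";

function categoriesFor(question: string): Category[] {
  const q = question.toLowerCase();
  if (q.startsWith("why did we")) return ["decision", "lesson"];
  if (q.startsWith("how do i") || q.startsWith("how to")) return ["workflow", "fact"];
  if (q.startsWith("who")) return ["person"];
  return ["fact", "event", "conversation"]; // broad default
}
```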

🎯 Adaptive Recall

Three recall modes, automatically selected based on context:

  • General — broad semantic search, good for open-ended questions
  • Focused — narrow search with high relevance threshold, good for specific lookups
  • Deep — multi-pass search with graph traversal, good for complex reasoning

The Context Engine picks the right mode. You don't configure anything.
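A hedged sketch of what mode selection could look like. The trigger words, thresholds, and pass counts below are guesses for illustration; the Context Engine's actual heuristics are not documented here:

```typescript
// Hypothetical recall-mode selection and per-mode configuration.

type RecallMode = "general" | "focused" | "deep";

interface ModeConfig { threshold: number; passes: number; traverseGraph: boolean }

// Made-up settings matching the mode descriptions above.
const MODES: Record<RecallMode, ModeConfig> = {
  general: { threshold: 0.3, passes: 1, traverseGraph: false },
  focused: { threshold: 0.7, passes: 1, traverseGraph: false },
  deep:    { threshold: 0.4, passes: 3, traverseGraph: true },
};

function pickMode(query: string): RecallMode {
  const words = query.trim().split(/\s+/).length;
  // Reasoning-style questions get the multi-pass graph traversal.
  if (/\b(why|explain|compare|trace)\b/i.test(query)) return "deep";
  // Short queries with a quoted term look like specific lookups.
  if (words <= 6 && /["']/.test(query)) return "focused";
  return "general";
}
```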


⚡ Quick Start

curl -fsSL https://memorycrystal.ai/crystal | bash

This installs the OpenClaw plugin and sets up your memory backend. Choose during install:

  • Cloud — hosted at memorycrystal.ai, zero config
  • Self-hosted — your own Convex deployment, full data sovereignty
  • Local — SQLite only, no cloud, context engine only

After install, your AI has memory. Every conversation is captured, extracted, and searchable.


🛠️ Memory Tools

14 tools exposed via MCP and the OpenClaw plugin:

| Tool | What it does |
|---|---|
| crystal_remember | Store a memory manually — decisions, facts, lessons, anything worth keeping |
| crystal_recall | Semantic search across all long-term memory |
| crystal_what_do_i_know | Snapshot of everything known about a topic |
| crystal_why_did_we | Decision archaeology — understand why a past decision was made |
| crystal_checkpoint | Save a memory snapshot at a milestone |
| crystal_search_messages | Search verbatim conversation history (STM) |
| crystal_preflight | Pre-flight check before risky actions — returns relevant rules and lessons |
| crystal_forget | Archive a memory |
| crystal_wake | Session startup — loads briefing and guardrails |
| crystal_recent | Fetch recent messages for short-term context |
| crystal_stats | Memory and usage statistics |
| crystal_who_owns | Find who owns a file, module, or area |
| crystal_explain_connection | Explain the relationship between two concepts |
| crystal_dependency_chain | Trace dependency chains between entities |

These tools work in any MCP-compatible host (Claude Desktop, Cursor, Windsurf, etc.) or automatically within OpenClaw.
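A host-side call might look like the sketch below. The tool names are real, but `callTool` is a stand-in for whatever MCP client your host provides (e.g. the `@modelcontextprotocol/sdk` client), and the argument shapes are assumptions, not the documented schema:

```typescript
// Hypothetical invocation of crystal_* tools through a generic MCP client.

interface ToolCall { name: string; arguments: Record<string, unknown> }

// Stand-in for a real MCP client: a real host serializes this over the
// MCP protocol; here we just echo the request back as JSON.
async function callTool(call: ToolCall): Promise<string> {
  return JSON.stringify(call);
}

// Store a decision, then search for it later.
async function example(): Promise<string[]> {
  const stored = await callTool({
    name: "crystal_remember",
    arguments: { text: "We chose Convex for the backend", category: "decision" },
  });
  const recalled = await callTool({
    name: "crystal_recall",
    arguments: { query: "why did we choose Convex?" },
  });
  return [stored, recalled];
}
```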


📦 Architecture

memorycrystal/
├── plugin/                 OpenClaw plugin (crystal-memory)
│   ├── index.js            Plugin entry, hooks into conversation lifecycle
│   └── store/              Local SQLite store (offline fallback)
├── mcp-server/             MCP server (@memorycrystal/mcp-server)
│   └── src/index.ts        Exposes crystal_* tools over MCP protocol
├── packages/
│   └── mcp-server/         Streamable HTTP MCP server variant
├── apps/
│   └── web/                Next.js 15 dashboard (React 19, Tailwind 4)
│       ├── Memories viewer, session browser, API key management
│       └── Device flow auth (RFC 8628-style)
├── convex/                 Backend (Convex)
│   ├── schema.ts           Multi-tenant schema
│   └── crystal/            Capture, recall, sessions, graph enrichment
└── scripts/                Install, bootstrap, doctor, enable/disable

🧪 Testing

Unit tests (convex/crystal/__tests__/) — 5 test files using Vitest + convex-test:

| File | Covers |
|---|---|
| message-search.test.ts | Message vector search |
| messageEmbeddings.test.ts | Embedding generation and storage |
| messageTurns.test.ts | Multi-turn message handling |
| multitenancy.test.ts | Cross-tenant isolation |
| recall-ranking.test.ts | Recall result ranking and scoring |

Integration tests (packages/mcp-server/test/) — end-to-end tests against the MCP server HTTP API.

# Run unit tests
npx vitest                            # all unit tests (watch mode)
npx vitest run                        # single run (CI)

# Run integration tests (requires MEMORY_CRYSTAL_API_KEY env var)
node packages/mcp-server/test/integration.test.js

# Smoke test (plugin health check)
npm run test:smoke

# Capture end-to-end test
npm run test:capture-e2e

🔐 Security

  • Multi-tenant isolation — each user's memories are fully isolated at the database level; owner checks on every memory retrieval
  • API keys — SHA-256 hashed at rest; plaintext keys are never stored; transient device-flow tokens cleared after retrieval
  • Bearer auth — all API and MCP endpoints require Authorization: Bearer <key>
  • Per-key rate limiting — rate limits enforced per API key on all endpoints
  • Audit logging — all API actions (admin, impersonation, data access) are logged to crystalAuditLog
  • Prompt injection mitigation — recalled memories are injected as informational context only; wake briefings include a security header instructing the model to treat recalled content as non-directive
  • Auto-updater integrity — plugin/update.sh verifies SHA-256 checksums against checksums.txt when available; update aborts on mismatch
  • Device flow auth — RFC 8628-style device code flow for CLI key provisioning
  • Local mode — SQLite fallback, your data never leaves your machine
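The "SHA-256 hashed at rest" key scheme above reduces to a small pattern: store only digests, hash the presented bearer token on lookup. This is a minimal sketch of that idea with an in-memory set standing in for the real key table:

```typescript
// Minimal sketch of hashed-at-rest API keys; the store shape is illustrative.
import { createHash } from "node:crypto";

const keyStore = new Set<string>(); // holds digests, never plaintext keys

function sha256Hex(key: string): string {
  return createHash("sha256").update(key).digest("hex");
}

function registerKey(plaintext: string): void {
  keyStore.add(sha256Hex(plaintext)); // plaintext is discarded after hashing
}

// Bearer auth: hash the presented token and look the digest up.
function authenticate(bearerToken: string): boolean {
  return keyStore.has(sha256Hex(bearerToken));
}
```

Because only digests are stored, a database leak never exposes usable keys; the trade-off is that a lost key cannot be re-displayed, only rotated.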

🏠 Self-Hosted Setup

Run everything on your own infrastructure. You need:

  1. A Convex project (free tier works)
  2. An OpenClaw install
View on GitHub

Category: Development · Languages: TypeScript
GitHub Stars: 4 · Forks: 0

Security Score: 90/100 (audited on Mar 29, 2026; no findings)