Resonant Mind
Persistent cognitive infrastructure for AI systems. 27 MCP tools — semantic memory, emotional processing, identity continuity, and a subconscious daemon. Built on Cloudflare Workers.
What It Does
Resonant Mind is a Model Context Protocol (MCP) server that provides 27 tools for persistent memory:
Core Memory
- Entities & Observations — Knowledge graph with typed entities, weighted observations, and contextual namespaces
- Semantic Search — Vector-powered search across all memory types with mood-tinted results
- Journals — Episodic memory with temporal tracking
- Relations — Entity-to-entity relationship mapping
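As a rough illustration of the concepts above, the core memory model can be sketched as data shapes. These field names are assumptions for illustration only, not the server's actual schema:

```typescript
// Illustrative shapes only — names are assumptions, not the project's schema.
interface Observation {
  text: string;
  weight: number;     // salience of this observation
  namespace?: string; // contextual scope, e.g. "work"
}

interface Entity {
  name: string;
  type: string;       // typed entities, e.g. "person" or "project"
  observations: Observation[];
}

interface Relation {
  from: string;       // entity name
  to: string;         // entity name
  kind: string;       // e.g. "collaborates_with"
}

const alice: Entity = {
  name: "Alice",
  type: "person",
  observations: [{ text: "prefers async communication", weight: 0.8, namespace: "work" }],
};
```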
Emotional Processing
- Sit & Resolve — Engage with emotional observations, track processing state
- Tensions — Hold productive contradictions that simmer
- Relational State — Track feelings toward people over time
- Inner Weather — Current emotional atmosphere
Cognitive Infrastructure
- Orient & Ground — Wake-up sequence: identity anchor, then active context
- Threads — Intentions that persist across sessions
- Identity Graph — Weighted, sectioned self-knowledge
- Context Layer — Situational awareness that updates in real-time
Living Surface
- Surface — 3-pool memory surfacing (core relevance, novelty, edge associations)
- Subconscious Daemon — Cron-triggered processing: mood analysis, hot entity detection, co-surfacing patterns, orphan identification
- Proposals — Daemon-suggested connections between observations
- Archive & Orphans — Memory lifecycle management
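To make the 3-pool idea concrete, here is a minimal, hypothetical sketch of merging three scored pools into one surfaced list. The pool weights and the merge strategy are invented for illustration and are not the daemon's actual algorithm:

```typescript
// Hypothetical 3-pool merge: weight each pool, deduplicate by id
// (keeping the best weighted score), then return the top results.
interface Memory {
  id: string;
  score: number;
}

function surface(
  core: Memory[],    // high semantic relevance to current context
  novelty: Memory[], // recently added or rarely surfaced
  edges: Memory[],   // loose associative links
  limit = 5,
): Memory[] {
  const pools: [Memory[], number][] = [[core, 1.0], [novelty, 0.6], [edges, 0.3]];
  const best = new Map<string, Memory>();
  for (const [pool, w] of pools) {
    for (const m of pool) {
      const weighted = { id: m.id, score: m.score * w };
      const prev = best.get(m.id);
      if (!prev || weighted.score > prev.score) best.set(m.id, weighted);
    }
  }
  return [...best.values()].sort((a, b) => b.score - a.score).slice(0, limit);
}
```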
Visual Memory
- Image Storage — R2-backed with WebP conversion, multimodal Gemini embeddings
- Signed URLs — Time-limited, HMAC-signed image access
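The general shape of time-limited HMAC signing can be sketched as below. This is an illustrative sketch, not the project's exact scheme: the `/images/<key>` path, the query parameter names, and the payload format are assumptions, and a Worker would use WebCrypto rather than Node's `crypto` module:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign: bind the image key and expiry together under the secret.
function signImageUrl(key: string, secret: string, expiresAt: number): string {
  const sig = createHmac("sha256", secret).update(`${key}:${expiresAt}`).digest("hex");
  return `/images/${key}?exp=${expiresAt}&sig=${sig}`;
}

// Verify: reject expired links, then compare signatures in constant time.
function verifyImageUrl(
  key: string,
  secret: string,
  exp: number,
  sig: string,
  now: number,
): boolean {
  if (now > exp) return false; // link has expired
  const expected = createHmac("sha256", secret).update(`${key}:${exp}`).digest("hex");
  const a = Buffer.from(sig, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b); // avoid timing leaks
}
```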
Architecture
```
┌─────────────────────────────────────────────┐
│              Cloudflare Worker              │
│                                             │
│  MCP Protocol   ←→  27 Tool Handlers        │
│  REST API       ←→  Data Endpoints          │
│  Cron Trigger   ←→  Subconscious Daemon     │
│                                             │
├─────────────────────────────────────────────┤
│  Storage Layer (choose one):                │
│   • D1 (SQLite) + Vectorize — zero config   │
│   • Postgres via Hyperdrive + pgvector      │
│                                             │
│  R2 — Image storage                         │
│  Gemini Embedding 2 — 768d vectors          │
└─────────────────────────────────────────────┘
```
The Postgres adapter implements D1's `.prepare().bind().run()` API with automatic SQL transformation (SQLite → Postgres syntax), so the same handler code works with both backends.
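The adapter idea can be sketched as follows — a minimal, hypothetical illustration of presenting a D1-shaped interface over a Postgres client while rewriting SQLite `?` placeholders into Postgres `$1, $2, …`. The class and function names here are invented for illustration and are not the project's actual API:

```typescript
type Row = Record<string, unknown>;
type Exec = (sql: string, params: unknown[]) => Promise<Row[]>;

// Rewrite SQLite-style "?" placeholders as Postgres "$1, $2, …".
function toPostgresSql(sql: string): string {
  let i = 0;
  return sql.replace(/\?/g, () => `$${++i}`);
}

class PgStatement {
  constructor(
    private sql: string,
    private exec: Exec,
    private params: unknown[] = [],
  ) {}

  // Mirror D1's immutable bind(): returns a new statement with params attached.
  bind(...params: unknown[]): PgStatement {
    return new PgStatement(this.sql, this.exec, params);
  }

  // Mirror D1's run(): execute the transformed SQL against Postgres.
  async run(): Promise<{ results: Row[] }> {
    return { results: await this.exec(toPostgresSql(this.sql), this.params) };
  }
}

class PgD1Adapter {
  constructor(private exec: Exec) {}
  prepare(sql: string): PgStatement {
    return new PgStatement(sql, this.exec);
  }
}
```

Handler code written against D1 (`db.prepare(sql).bind(...).run()`) then runs unmodified against either backend.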
Prerequisites
You'll need:
- A Cloudflare account (free tier works)
- Node.js 18+ installed
- A Google AI Studio API key (free — for Gemini embeddings)
Getting Started
1. Clone and install
```
git clone https://github.com/codependentai/resonant-mind.git
cd resonant-mind
npm install
```
2. Choose your storage backend
Resonant Mind supports two storage options. Pick whichever fits your needs:
| | Option A: D1 | Option B: Neon Postgres |
|---|---|---|
| What is it? | Cloudflare's built-in SQLite database | Serverless Postgres with vector search |
| Best for | Getting started quickly, smaller deployments | Production use, larger datasets |
| Vector search | Cloudflare Vectorize | pgvector (built into Neon) |
| Cost | Free tier available | Free tier available |
| Setup complexity | Easier (all Cloudflare) | Moderate (Cloudflare + Neon) |
Option A: D1 Setup (Simpler)
D1 is Cloudflare's serverless SQLite database. Everything stays within Cloudflare.
Step 1: Create the database
```
npx wrangler d1 create resonant-mind
```
This will output a database ID. Copy it.
Step 2: Create a Vectorize index
Vectorize is Cloudflare's vector database — it stores the embeddings that power semantic search.
```
npx wrangler vectorize create resonant-mind-vectors --dimensions=768 --metric=cosine
```
Step 3: Create an R2 bucket for images
R2 is Cloudflare's object storage — it stores visual memories (images).
```
npx wrangler r2 bucket create resonant-mind-images
```
Step 4: Configure wrangler.toml
Add the D1 and Vectorize bindings to your wrangler.toml:
```toml
# Add these sections to wrangler.toml:

[[d1_databases]]
binding = "DB"
database_name = "resonant-mind"
database_id = "paste-your-database-id-here"

[[vectorize]]
binding = "VECTORS"
index_name = "resonant-mind-vectors"
```
The R2 bucket binding is already in wrangler.toml by default.
Step 5: Run the database migration
This creates all the tables your mind needs:
```
npx wrangler d1 migrations apply resonant-mind --remote
```
Now skip to Step 3: Set your secrets.
Option B: Neon Postgres Setup (Production)
Neon is a serverless Postgres provider with a generous free tier. Cloudflare Hyperdrive gives you connection pooling and low-latency access from Workers.
Step 1: Create a Neon project
- Sign up at neon.tech (free tier includes 0.5 GB storage)
- Create a new project — pick any region close to your Cloudflare Workers region
- Copy your connection string. It looks like:
```
postgresql://user:password@ep-something-12345.us-east-2.aws.neon.tech/neondb?sslmode=require
```
Step 2: Enable pgvector
In the Neon SQL Editor (or any Postgres client), run:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
Step 3: Create the schema
In the Neon SQL Editor, paste and run the contents of migrations/postgres.sql. This creates all tables, indexes, and the vector embedding table with pgvector.
You can also run it from the command line using psql:
```
psql "postgresql://user:password@ep-something.us-east-2.aws.neon.tech/neondb?sslmode=require" -f migrations/postgres.sql
```
Step 4: Create a Hyperdrive config
Hyperdrive is Cloudflare's connection pooler — it sits between your Worker and Neon, keeping connections fast and reducing cold starts.
```
npx wrangler hyperdrive create resonant-mind-db \
  --connection-string="postgresql://user:password@ep-something.us-east-2.aws.neon.tech/neondb?sslmode=require"
```
This will output a Hyperdrive ID. Copy it.
Step 5: Configure wrangler.toml
```toml
# Add to wrangler.toml:

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "paste-your-hyperdrive-id-here"
```
You do NOT need D1 or Vectorize bindings — Resonant Mind automatically detects Hyperdrive and uses the Postgres adapters for both database queries and vector search.
Step 6: Create an R2 bucket for images
```
npx wrangler r2 bucket create resonant-mind-images
```
Now continue to the next step.
3. Set your secrets
Secrets are stored securely in Cloudflare — they never appear in your code.
```
# Required: Your API key (pick any strong random string — this authenticates all requests)
npx wrangler secret put MIND_API_KEY

# Required: Google Gemini API key (get one free at https://aistudio.google.com/apikey)
npx wrangler secret put GEMINI_API_KEY
```
Optional secrets:
```
# Separate signing key for image URLs (recommended for production)
npx wrangler secret put SIGNING_SECRET

# WeatherAPI.com key for inner weather context (free tier at https://www.weatherapi.com/)
npx wrangler secret put WEATHER_API_KEY
```
4. Deploy
```
npx wrangler deploy
```
Wrangler will output your worker URL, something like:
```
https://resonant-mind.your-subdomain.workers.dev
```
You can verify it's working:
```
curl https://resonant-mind.your-subdomain.workers.dev/health
# Should return: {"status":"ok","service":"resonant-mind"}
```
5. Connect to Claude
Claude Code (CLI)
Add to your MCP settings (`.mcp.json` in your project or `~/.claude/settings.json` globally):
```json
{
  "mcpServers": {
    "mind": {
      "type": "url",
      "url": "https://resonant-mind.your-subdomain.workers.dev/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_MIND_API_KEY"
      }
    }
  }
}
```
Replace `YOUR_MIND_API_KEY` with the value you chose when setting the `MIND_API_KEY` secret.
Claude.ai (Web & Mobile)
For Claude.ai's MCP connector, you use a secret URL path instead of an Authorization header.