
Resonant Mind

Persistent cognitive infrastructure for AI systems. 27 MCP tools — semantic memory, emotional processing, identity continuity, and a subconscious daemon. Built on Cloudflare Workers.

Install / Use

/learn @codependentai/Resonant Mind

README

<p align="center"> <img src="assets/banner.png" alt="Resonant Mind" width="720" /> </p> <p align="center"> <a href="https://github.com/codependentai/resonant-mind/releases/latest"><img src="https://img.shields.io/github/v/release/codependentai/resonant-mind?color=d4a44a" alt="Release" /></a> <a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License" /></a> <a href="https://modelcontextprotocol.io/"><img src="https://img.shields.io/badge/MCP-Server-5eaba5.svg" alt="MCP Server" /></a> <a href="https://www.typescriptlang.org/"><img src="https://img.shields.io/badge/TypeScript-5.3-3178c6.svg" alt="TypeScript" /></a> <a href="https://workers.cloudflare.com/"><img src="https://img.shields.io/badge/Cloudflare-Workers-f38020.svg" alt="Cloudflare Workers" /></a> <a href="https://ai.google.dev/gemini-api/docs/embeddings"><img src="https://img.shields.io/badge/Gemini-Embeddings-4285f4.svg" alt="Gemini Embeddings" /></a> </p> <p align="center"><em>Persistent cognitive infrastructure for AI systems.<br/>Semantic memory, emotional processing, identity continuity, and a subconscious daemon that finds patterns while you sleep.</em></p> <p align="center"> <a href="https://x.com/codependent_ai"><img src="https://img.shields.io/badge/𝕏-@codependent__ai-000000?logo=x&logoColor=white" alt="X/Twitter" /></a> <a href="https://tiktok.com/@codependentai"><img src="https://img.shields.io/badge/TikTok-@codependentai-000000?logo=tiktok&logoColor=white" alt="TikTok" /></a> <a href="https://t.me/+xSE1P_qFPgU4NDhk"><img src="https://img.shields.io/badge/Telegram-Updates-26A5E4?logo=telegram&logoColor=white" alt="Telegram" /></a> </p>

What It Does

Resonant Mind is a Model Context Protocol (MCP) server that provides 27 tools for persistent memory:

Core Memory

  • Entities & Observations — Knowledge graph with typed entities, weighted observations, and contextual namespaces
  • Semantic Search — Vector-powered search across all memory types with mood-tinted results
  • Journals — Episodic memory with temporal tracking
  • Relations — Entity-to-entity relationship mapping
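
The pieces above fit together as a small knowledge graph. A sketch in TypeScript of how they might relate (the names and field shapes here are illustrative assumptions; the actual schema lives in the project's migrations):

```typescript
// Hypothetical shapes illustrating the knowledge-graph core: typed entities,
// weighted observations attached to them, and entity-to-entity relations.
interface Entity {
  name: string;      // unique identifier
  type: string;      // typed entities: "person", "project", ...
  namespace: string; // contextual namespace the entity lives in
}

interface Observation {
  entity: string; // name of the entity this observation attaches to
  text: string;   // the remembered content
  weight: number; // weighted observations: higher = more salient
}

interface Relation {
  from: string; // entity-to-entity relationship mapping
  to: string;
  kind: string; // e.g. "works_on"
}

// A tiny in-memory graph, just to show how the pieces connect.
const entities: Entity[] = [
  { name: "ada", type: "person", namespace: "default" },
  { name: "resonant-mind", type: "project", namespace: "default" },
];
const relations: Relation[] = [{ from: "ada", to: "resonant-mind", kind: "works_on" }];
const observations: Observation[] = [
  { entity: "resonant-mind", text: "runs on Cloudflare Workers", weight: 0.9 },
];

// Look up everything attached to one entity.
function about(name: string) {
  return {
    entity: entities.find((e) => e.name === name),
    observations: observations.filter((o) => o.entity === name),
    relations: relations.filter((r) => r.from === name || r.to === name),
  };
}

console.log(about("resonant-mind").observations.length); // 1
```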

Emotional Processing

  • Sit & Resolve — Engage with emotional observations, track processing state
  • Tensions — Hold productive contradictions that simmer
  • Relational State — Track feelings toward people over time
  • Inner Weather — Current emotional atmosphere

Cognitive Infrastructure

  • Orient & Ground — Wake-up sequence: identity anchor, then active context
  • Threads — Intentions that persist across sessions
  • Identity Graph — Weighted, sectioned self-knowledge
  • Context Layer — Situational awareness that updates in real-time

Living Surface

  • Surface — 3-pool memory surfacing (core relevance, novelty, edge associations)
  • Subconscious Daemon — Cron-triggered processing: mood analysis, hot entity detection, co-surfacing patterns, orphan identification
  • Proposals — Daemon-suggested connections between observations
  • Archive & Orphans — Memory lifecycle management
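
The 3-pool surfacing idea can be sketched as a merge of independently scored candidate lists. The pool names come from the list above; the per-pool quota and de-duplication strategy below are assumptions for illustration, not the daemon's actual heuristics:

```typescript
// Illustrative sketch of 3-pool memory surfacing: take the top of each pool
// so no single pool crowds out the others, then merge and de-duplicate.
type Scored = { id: string; score: number };

function surface(
  core: Scored[],    // pool 1: core relevance to the current context
  novelty: Scored[], // pool 2: fresh or rarely-surfaced memories
  edges: Scored[],   // pool 3: loose edge associations
  limit = 6
): string[] {
  const per = Math.ceil(limit / 3);
  const top = (pool: Scored[]) =>
    [...pool].sort((a, b) => b.score - a.score).slice(0, per);
  const merged = [...top(core), ...top(novelty), ...top(edges)];
  const seen = new Set<string>();
  return merged
    .sort((a, b) => b.score - a.score)
    .filter((m) => !seen.has(m.id) && !!seen.add(m.id)) // keep first (highest) occurrence
    .slice(0, limit)
    .map((m) => m.id);
}

console.log(
  surface([{ id: "a", score: 0.9 }], [{ id: "b", score: 0.5 }], [{ id: "a", score: 0.2 }])
); // [ 'a', 'b' ]
```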

Visual Memory

  • Image Storage — R2-backed with WebP conversion, multimodal Gemini embeddings
  • Signed URLs — Time-limited, HMAC-signed image access
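
Time-limited HMAC signing follows a standard pattern: sign the path plus an expiry timestamp, then verify both signature and expiry on access. A minimal sketch using Node's crypto (the Worker itself would use Web Crypto; the query parameter names here are illustrative, not Resonant Mind's actual scheme):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign "<path>:<expiry>" with a shared secret and append both to the URL.
function signUrl(path: string, secret: string, ttlSeconds: number): string {
  const exp = Math.floor(Date.now() / 1000) + ttlSeconds;
  const sig = createHmac("sha256", secret).update(`${path}:${exp}`).digest("hex");
  return `${path}?exp=${exp}&sig=${sig}`;
}

// Reject expired links, then compare signatures in constant time.
function verifyUrl(url: string, secret: string): boolean {
  const [path, query] = url.split("?");
  const params = new URLSearchParams(query);
  const exp = Number(params.get("exp"));
  const sig = params.get("sig") ?? "";
  if (!Number.isFinite(exp) || exp < Math.floor(Date.now() / 1000)) return false;
  const expected = createHmac("sha256", secret).update(`${path}:${exp}`).digest("hex");
  return sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```

Anyone holding the URL can fetch the image until `exp` passes, but cannot forge a link for another path or a longer lifetime without the signing secret.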

Architecture

┌────────────────────────────────────────────┐
│              Cloudflare Worker             │
│                                            │
│  MCP Protocol ←→ 27 Tool Handlers          │
│  REST API     ←→ Data Endpoints            │
│  Cron Trigger ←→ Subconscious Daemon       │
│                                            │
├────────────────────────────────────────────┤
│  Storage Layer (choose one):               │
│  • D1 (SQLite) + Vectorize — zero config   │
│  • Postgres via Hyperdrive + pgvector      │
│                                            │
│  R2 — Image storage                        │
│  Gemini Embedding 2 — 768d vectors         │
└────────────────────────────────────────────┘

The Postgres adapter implements D1's .prepare().bind().run() API with automatic SQL transformation (SQLite → Postgres syntax), so the same handler code works with both backends.
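
That adapter pattern can be sketched roughly as follows. This is a simplified illustration: the real adapter's SQL transformation covers more than the positional-placeholder rewrite shown here, and the type names are assumptions:

```typescript
// Simplified sketch of a D1-style .prepare().bind().run() facade over a
// generic Postgres query function.
type QueryFn = (sql: string, params: unknown[]) => Promise<{ rows: unknown[] }>;

// Rewrite SQLite's "?" placeholders to Postgres "$1", "$2", ...
function sqliteToPostgres(sql: string): string {
  let n = 0;
  return sql.replace(/\?/g, () => `$${++n}`);
}

class PreparedStatement {
  private params: unknown[] = [];
  constructor(private sql: string, private query: QueryFn) {}
  bind(...params: unknown[]): this {
    this.params = params;
    return this;
  }
  async run() {
    const { rows } = await this.query(sqliteToPostgres(this.sql), this.params);
    return { results: rows, success: true }; // D1-shaped result
  }
}

class PostgresAsD1 {
  constructor(private query: QueryFn) {}
  prepare(sql: string): PreparedStatement {
    return new PreparedStatement(sql, this.query);
  }
}
```

Handler code written against `db.prepare("SELECT ... WHERE id = ?").bind(id).run()` then runs unchanged whether `db` is a real D1 binding or this Postgres facade.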

Prerequisites

You'll need:

  • Node.js and npm (all commands below run through npx wrangler)
  • A Cloudflare account (the free tier works)
  • A Google Gemini API key for embeddings (free at https://aistudio.google.com/apikey)
  • A Neon account, only if you choose the Postgres backend

Getting Started

1. Clone and install

git clone https://github.com/codependentai/resonant-mind.git
cd resonant-mind
npm install

2. Choose your storage backend

Resonant Mind supports two storage options. Pick whichever fits your needs:

| | Option A: D1 | Option B: Neon Postgres |
|---|---|---|
| What is it? | Cloudflare's built-in SQLite database | Serverless Postgres with vector search |
| Best for | Getting started quickly, smaller deployments | Production use, larger datasets |
| Vector search | Cloudflare Vectorize | pgvector (built into Neon) |
| Cost | Free tier available | Free tier available |
| Setup complexity | Easier (all Cloudflare) | Moderate (Cloudflare + Neon) |


Option A: D1 Setup (Simpler)

D1 is Cloudflare's serverless SQLite database. Everything stays within Cloudflare.

Step 1: Create the database

npx wrangler d1 create resonant-mind

This will output a database ID. Copy it.

Step 2: Create a Vectorize index

Vectorize is Cloudflare's vector database — it stores the embeddings that power semantic search.

npx wrangler vectorize create resonant-mind-vectors --dimensions=768 --metric=cosine
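
The `--dimensions=768` flag matches the Gemini embedding size, and `--metric=cosine` tells the index to rank neighbours by cosine similarity. As a reminder of what that metric computes, a minimal sketch:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// 1 means identical direction, 0 means orthogonal (unrelated) vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
```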

Step 3: Create an R2 bucket for images

R2 is Cloudflare's object storage — it stores visual memories (images).

npx wrangler r2 bucket create resonant-mind-images

Step 4: Configure wrangler.toml

Add the D1 and Vectorize bindings to your wrangler.toml:

# Add these sections to wrangler.toml:

[[d1_databases]]
binding = "DB"
database_name = "resonant-mind"
database_id = "paste-your-database-id-here"

[[vectorize]]
binding = "VECTORS"
index_name = "resonant-mind-vectors"

The R2 bucket binding is already in wrangler.toml by default.

Step 5: Run the database migration

This creates all the tables your mind needs:

npx wrangler d1 migrations apply resonant-mind --remote

Now skip to Step 3: Set your secrets.


Option B: Neon Postgres Setup (Production)

Neon is a serverless Postgres provider with a generous free tier. Cloudflare Hyperdrive gives you connection pooling and low-latency access from Workers.

Step 1: Create a Neon project

  1. Sign up at neon.tech (free tier includes 0.5 GB storage)
  2. Create a new project — pick any region close to your Cloudflare Workers region
  3. Copy your connection string. It looks like:
    postgresql://user:password@ep-something-12345.us-east-2.aws.neon.tech/neondb?sslmode=require
    

Step 2: Enable pgvector

In the Neon SQL Editor (or any Postgres client), run:

CREATE EXTENSION IF NOT EXISTS vector;

Step 3: Create the schema

In the Neon SQL Editor, paste and run the contents of migrations/postgres.sql. This creates all tables, indexes, and the vector embedding table with pgvector.

You can also run it from the command line using psql:

psql "postgresql://user:password@ep-something.us-east-2.aws.neon.tech/neondb?sslmode=require" -f migrations/postgres.sql

Step 4: Create a Hyperdrive config

Hyperdrive is Cloudflare's connection pooler — it sits between your Worker and Neon, keeping connections fast and reducing cold starts.

npx wrangler hyperdrive create resonant-mind-db \
  --connection-string="postgresql://user:password@ep-something.us-east-2.aws.neon.tech/neondb?sslmode=require"

This will output a Hyperdrive ID. Copy it.

Step 5: Configure wrangler.toml

# Add to wrangler.toml:

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "paste-your-hyperdrive-id-here"

You do NOT need D1 or Vectorize bindings — Resonant Mind automatically detects Hyperdrive and uses the Postgres adapters for both database queries and vector search.

Step 6: Create an R2 bucket for images

npx wrangler r2 bucket create resonant-mind-images

Now continue to the next step.


3. Set your secrets

Secrets are stored securely in Cloudflare — they never appear in your code.

# Required: Your API key (pick any strong random string — this authenticates all requests)
npx wrangler secret put MIND_API_KEY

# Required: Google Gemini API key (get one free at https://aistudio.google.com/apikey)
npx wrangler secret put GEMINI_API_KEY
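
If you need a strong random string for MIND_API_KEY, 32 random bytes hex-encoded is a reasonable choice, e.g. generated with Node:

```typescript
import { randomBytes } from "node:crypto";

// 32 cryptographically random bytes -> a 64-character hex key.
const key = randomBytes(32).toString("hex");
console.log(key);
```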

Optional secrets:

# Separate signing key for image URLs (recommended for production)
npx wrangler secret put SIGNING_SECRET

# WeatherAPI.com key for inner weather context (free tier at https://www.weatherapi.com/)
npx wrangler secret put WEATHER_API_KEY

4. Deploy

npx wrangler deploy

Wrangler will output your worker URL, something like:

https://resonant-mind.your-subdomain.workers.dev

You can verify it's working:

curl https://resonant-mind.your-subdomain.workers.dev/health
# Should return: {"status":"ok","service":"resonant-mind"}

5. Connect to Claude

Claude Code (CLI)

Add to your MCP settings (.mcp.json in your project or ~/.claude/settings.json globally):

{
  "mcpServers": {
    "mind": {
      "type": "url",
      "url": "https://resonant-mind.your-subdomain.workers.dev/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_MIND_API_KEY"
      }
    }
  }
}

Replace YOUR_MIND_API_KEY with whatever you entered when setting the MIND_API_KEY secret.

Claude.ai (Web & Mobile)

For Claude.ai's MCP connector, you use a secret URL path instead of the Authorization header shown above.
