
OpenMemory Auto Manager

Auto start/stop OpenMemory MCP server across Claude Code, OpenCode, Codex CLI, and Gemini CLI.

Share long-term memory across all your AI agents — no manual start/stop needed.




How It Works

Terminal 1: type claude    → OpenMemory auto-starts (refcount = 1)
Terminal 2: type opencode  → shares same server     (refcount = 2)
Terminal 3: type gemini    → shares same server     (refcount = 3)
Terminal 4: type codex     → shares same server     (refcount = 4)
Terminal 1: exits          → still running          (refcount = 3)
Terminal 2: exits          → still running          (refcount = 2)
Terminal 3: exits          → still running          (refcount = 1)
Terminal 4: exits          → auto stops             (refcount = 0)

Reference counting ensures OpenMemory only runs when needed and shuts down when the last session closes.
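The counting logic can be sketched roughly like this (hypothetical helper names and messages; the real implementation lives in openmemory-manager.sh):

```shell
# Sketch of the reference-counting idea (illustrative, not the real script).
# The counter lives in a file under the lock directory.

_om_count_file() {
  echo "${OPENMEMORY_LOCK_DIR:-/tmp/openmemory}/refcount"
}

_om_ref_incr() {
  local f n
  f=$(_om_count_file)
  mkdir -p "$(dirname "$f")"
  n=$(cat "$f" 2>/dev/null || echo 0)
  echo $((n + 1)) > "$f"
  # First reference: this is where the server would be started.
  if [ "$n" -eq 0 ]; then
    echo "first session: start OpenMemory"
  fi
}

_om_ref_decr() {
  local f n
  f=$(_om_count_file)
  n=$(cat "$f" 2>/dev/null || echo 1)
  echo $((n - 1)) > "$f"
  # Last reference gone: this is where the server would be stopped.
  if [ "$((n - 1))" -le 0 ]; then
    echo "last session: stop OpenMemory"
  fi
}
```

Each CLI launch increments the counter; each shell exit decrements it, so the server outlives any single terminal but not the last one.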

Architecture

┌──────────────────────────────────────────────────────────────┐
│                    Your Machine                               │
│                                                              │
│  ┌─────────────┐  ┌────────────┐  ┌──────────────┐  ┌──────┐ │
│  │ Claude Code  │  │ OpenCode   │  │ Gemini CLI   │  │ Codex│ │
│  │             │  │            │  │              │  │ CLI  │ │
│  │ HTTP MCP ───┼──┼── HTTP MCP │──┼── HTTP MCP   │  │STDIO │ │
│  └─────────────┘  └────────────┘  └──────────────┘  │ only │ │
│         │                │                │         │      │ │
│         │                │                │         │Python│ │
│         │                │                │         │Proxy │ │
│         ▼                ▼                ▼         └──┬───┘ │
│  ┌─────────────────────────────────────────────────────┐    │
│  │       OpenMemory MCP Server                         │    │
│  │       localhost:8080                                 │    │
│  │                                                      │    │
│  │  • Synthetic embeddings (no API keys)               │    │
│  │  • SQLite vector store                              │    │
│  │  • HSG tiered memory architecture                   │    │
│  │  • MCP tools: store, query, list, delete            │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                              │
│  Lifecycle managed by openmemory-manager.sh                  │
│  ┌────────────────────────────────┐                          │
│  │ _om_ensure_running()           │                          │
│  │ _om_ref_incr() / _om_ref_decr()│                          │
│  │ zshexit cleanup hook           │                          │
│  └────────────────────────────────┘                          │
└──────────────────────────────────────────────────────────────┘

Features

  • Zero config — no API keys, no Docker, no external services
  • Auto lifecycle — server starts on first CLI launch, stops when all terminals close
  • Cross-tool memory — Claude Code, OpenCode, Codex CLI, and Gemini CLI share the same memory store
  • Codex STDIO bridge — Python proxy translates STDIO MCP to HTTP MCP for Codex CLI
  • Reference counting — safe multi-terminal usage with automatic cleanup
  • Stale recovery — resets counter if server crashes unexpectedly
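The stale-recovery item above can be sketched as: if the counter file claims active sessions but nothing is listening on the server port, reset the counter (hypothetical helper, illustrative only):

```shell
# Hypothetical sketch of stale recovery: a crashed server leaves a
# non-zero refcount behind, so reset it when the port is not in use.
_om_check_stale() {
  local port="${OPENMEMORY_PORT:-8080}"
  local f="${OPENMEMORY_LOCK_DIR:-/tmp/openmemory}/refcount"
  if [ -s "$f" ] && ! lsof -i ":$port" >/dev/null 2>&1; then
    echo 0 > "$f"
    echo "stale refcount reset"
  fi
}
```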

Prerequisites

  • OpenMemory server installed at ~/OpenMemory
  • Python 3.10+ with fastmcp (for Codex CLI proxy)
  • Node.js 18+ (for OpenMemory server)
  • curl, lsof (standard macOS/Linux tools)

Install OpenMemory Server

git clone https://github.com/CaviraOSS/OpenMemory.git ~/OpenMemory
cd ~/OpenMemory/packages/openmemory-js
npm install
npm run build

Install

git clone git@github.com:vancelin/openmemory.git ~/dev/memory
cd ~/dev/memory

# Create Python venv and install proxy dependency
uv venv .venv --python 3.11
uv pip install fastmcp --python .venv/bin/python

# Install shell hooks
chmod +x install.sh openmemory-manager.sh openmemory-codex-proxy.py
./install.sh

Restart your terminal, then run claude, opencode, codex, or gemini — OpenMemory starts automatically.
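If a CLI does not pick up the memory tools, a quick probe of the default endpoint tells you whether the server actually came up (illustrative check, assuming the default port; "000" means nothing answered):

```shell
# Probe the default OpenMemory endpoint; curl prints the HTTP status
# code, or 000 when it cannot connect at all.
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 \
  http://localhost:8080/mcp || true)
if [ -z "$code" ] || [ "$code" = "000" ]; then
  echo "OpenMemory not reachable; check /tmp/openmemory.log"
else
  echo "OpenMemory answered with HTTP $code"
fi
```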

Files

| File | Purpose |
|------|---------|
| openmemory-manager.sh | Core lifecycle manager — start/stop with reference counting |
| openmemory-codex-proxy.py | FastMCP STDIO→HTTP proxy for Codex CLI |
| install.sh | One-command shell hook installer (idempotent) |
| .mcp.json | Claude Code / OpenCode MCP config (HTTP transport) |
| .mcp.json.example | Example MCP config for reference |

MCP Configuration

Claude Code / OpenCode — .mcp.json

Place in project root or ~/.claude/.mcp.json (Claude Code) / ~/.opencode/.mcp.json (OpenCode) for global access:

{
  "mcpServers": {
    "openmemory": {
      "type": "http",
      "url": "http://localhost:8080/mcp"
    }
  }
}

Note: OpenCode supports the same .mcp.json format as Claude Code for MCP server configuration.

Gemini CLI — ~/.gemini/settings.json

{
  "mcpServers": {
    "openmemory": {
      "httpUrl": "http://localhost:8080/mcp",
      "trust": true
    }
  }
}

Codex CLI — ~/.codex/config.toml

[mcp_servers.openmemory]
command = "/path/to/.venv/bin/python"
args = ["/path/to/openmemory-codex-proxy.py"]

Codex CLI only supports STDIO transport. The proxy script uses FastMCP's create_proxy to transparently bridge all MCP tool calls to OpenMemory's HTTP endpoint.

Available MCP Tools

OpenMemory exposes these tools to all connected agents:

| Tool | Description |
|------|-------------|
| openmemory_store | Store a memory (text, facts, or both) |
| openmemory_query | Semantic search across stored memories |
| openmemory_list | List recent memories |
| openmemory_get | Fetch a single memory by ID |
| openmemory_reinforce | Boost salience of a memory |
| openmemory_delete | Delete a memory by ID |
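Under the hood these are ordinary MCP tool calls carried over JSON-RPC. As a rough illustration (not the supported client path; many HTTP transports require an initialize handshake first, so a bare request like this may be rejected):

```shell
# Illustrative only: ask the HTTP endpoint for its tool list over
# raw JSON-RPC. Real clients go through an MCP library.
payload='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
curl -s http://localhost:8080/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d "$payload" || true
```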

Usage

Once OpenMemory is running, all four CLIs can store and search memories through the MCP tools. Just ask naturally:

Storing Memories

You: Remember that I prefer dark mode in all editors
Claude: *calls openmemory_store* → Stored!
You: I like writing backends in Python and frontends in React
Gemini: *calls openmemory_store* → Stored!
You: Note that the production DB port is 5433, not 5432
Codex: *calls openmemory_store* → Stored!

Searching Memories

You: What's my editor preference?
Claude: *calls openmemory_query("editor preference")* → "You prefer dark mode in all editors"
You: What's the production DB port?
Gemini: *calls openmemory_query("production DB port")* → "5433"

Cross-Tool Recall

# Stored in Claude Code
You (Claude): Remember my project uses PostgreSQL 16

# Recalled in Gemini CLI
You (Gemini): What database does my project use?
Gemini: *calls openmemory_query* → "Your project uses PostgreSQL 16"

That's it — memories stored in one CLI are instantly available in all others.

Screenshots

Claude Code — storing and querying memories via OpenMemory MCP tools.

Gemini CLI — reading memories stored by other agents.

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| OPENMEMORY_DIR | ~/OpenMemory/packages/openmemory-js | OpenMemory server path |
| OPENMEMORY_PORT | 8080 | Server port |
| OPENMEMORY_URL | http://localhost:8080 | Codex proxy target URL |
| OPENMEMORY_LOCK_DIR | /tmp/openmemory | Lock/refcount directory |
| OPENMEMORY_LOG | /tmp/openmemory.log | Server log file |
| OPENMEMORY_USER_ID | $(whoami) | Default user ID for memory isolation |
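For example, to move the server to another port, export the overrides before launching any CLI (values are illustrative):

```shell
# Illustrative overrides; export them before running claude,
# opencode, codex, or gemini so the manager picks them up.
export OPENMEMORY_PORT=9090
export OPENMEMORY_URL="http://localhost:${OPENMEMORY_PORT}"
export OPENMEMORY_LOG="$HOME/openmemory.log"
```

If you change the port, also update the URL in each CLI's MCP config to match.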

Uninstall

Remove the hooks block from ~/.zshrc (between the OpenMemory auto-start/stop hook markers). The command below is for BSD sed (macOS); with GNU sed on Linux, omit the empty '' argument:

sed -i '' '/# ── OpenMemory auto-start/,/# ── End OpenMemory hooks/d' ~/.zshrc

Then remove the MCP configs from each CLI's settings file.

License

MIT


