MentisDB
MentisDB is a durable semantic memory engine and versioned skill registry for AI agents — a persistent, hash-chained brain that survives context resets, model swaps, and team turnover.
It stores semantically typed thoughts in an append-only, hash-chained memory log through a swappable storage adapter layer. The skill registry is a git-like immutable version store for agent instruction bundles — every upload is a new version, history is never overwritten, and every version is cryptographically signable.
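To make the integrity model concrete, here is a minimal sketch of an append-only, hash-chained log. This is an illustration only, not MentisDB's actual on-disk format or API: the struct names are invented, and std's non-cryptographic `DefaultHasher` stands in for whatever hash MentisDB actually uses.

```rust
// Illustrative hash-chained, append-only log. All names here are
// assumptions for the sketch; DefaultHasher is NOT cryptographic and
// only stands in for a real hash function.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Thought {
    content: String,
    prev_hash: u64, // hash of the previous entry; links the chain
    hash: u64,      // hash over (prev_hash, content)
}

fn entry_hash(prev_hash: u64, content: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    content.hash(&mut h);
    h.finish()
}

struct Chain {
    entries: Vec<Thought>,
}

impl Chain {
    fn new() -> Self {
        Chain { entries: Vec::new() }
    }

    // Append-only: existing entries are never mutated or removed.
    fn append(&mut self, content: &str) {
        let prev_hash = self.entries.last().map_or(0, |t| t.hash);
        let hash = entry_hash(prev_hash, content);
        self.entries.push(Thought {
            content: content.to_string(),
            prev_hash,
            hash,
        });
    }

    // Re-derive every link: tampering with any entry breaks the chain
    // from that point forward.
    fn verify(&self) -> bool {
        let mut prev = 0u64;
        self.entries.iter().all(|t| {
            let ok = t.prev_hash == prev && t.hash == entry_hash(prev, &t.content);
            prev = t.hash;
            ok
        })
    }
}

fn main() {
    let mut chain = Chain::new();
    chain.append("Decision: use the binary storage adapter");
    chain.append("Lesson: check the MCP port is free before startup");
    assert!(chain.verify());

    // Simulate tampering: rewriting history is detectable.
    chain.entries[0].content = "forged".to_string();
    assert!(!chain.verify());
    println!("chain integrity check works");
}
```

The point of the chain link is that no entry can be silently rewritten: changing one thought invalidates the hash of every thought after it.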
Why MentisDB
Harness Swapping — the same durable memory works across every AI coding environment. Connect Claude Code, OpenAI Codex, GitHub Copilot CLI, Qwen Code, Cursor, VS Code, or any MCP-capable host to the same mentisdbd daemon and your agents share one brain, regardless of which tool you picked up today.
Zero Knowledge Loss Across Context Boundaries — when an agent's context window fills, it writes a Summary checkpoint to MentisDB, compacts, reloads mentisdb_recent_context, and continues without losing a single decision. Chat history is ephemeral. MentisDB is permanent.
Fleet Orchestration at Scale — one project manager agent decomposes work, dispatches a parallel fleet of specialists, each pre-warmed with shared memory, and synthesizes results wave by wave. MentisDB is the coordination substrate: every agent reads from the same chain and writes its lessons back. The fleet's collective intelligence compounds.
Versioned Skill Registry — skills are not just stored, they are versioned like a git repository. Every upload to an existing skill_id creates a new immutable version (stored as a unified diff). Any historical version is reconstructable. Skills can be deprecated or revoked while full audit history is preserved. Uploading agents with registered Ed25519 keys must cryptographically sign their uploads — provenance is verifiable, not assumed.
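The append-only versioning semantics can be sketched in a few lines. Note the simplifications: MentisDB stores each version as a unified diff and supports Ed25519 signing, while this illustration stores full snapshots and omits signing entirely; `SkillRegistry`, `upload`, and the other names are assumptions, not the real API.

```rust
// Illustrative immutable version store: uploads append, nothing is
// overwritten, and deprecation preserves full history. Delta (diff)
// storage and Ed25519 signing are omitted for brevity.
use std::collections::HashMap;

struct SkillVersion {
    body: String,
    deprecated: bool,
}

#[derive(Default)]
struct SkillRegistry {
    // skill_id -> ordered, immutable version history (index 0 is v1)
    skills: HashMap<String, Vec<SkillVersion>>,
}

impl SkillRegistry {
    // Uploading to an existing skill_id never overwrites: it appends
    // a new version and returns its 1-based version number.
    fn upload(&mut self, skill_id: &str, body: &str) -> usize {
        let versions = self.skills.entry(skill_id.to_string()).or_default();
        versions.push(SkillVersion {
            body: body.to_string(),
            deprecated: false,
        });
        versions.len()
    }

    // Any historical version stays reconstructable.
    fn get(&self, skill_id: &str, version: usize) -> Option<&str> {
        self.skills
            .get(skill_id)?
            .get(version.checked_sub(1)?)
            .map(|v| v.body.as_str())
    }

    // Deprecation flags a version; the audit history is untouched.
    fn deprecate(&mut self, skill_id: &str, version: usize) {
        if let Some(v) = version
            .checked_sub(1)
            .and_then(|i| self.skills.get_mut(skill_id)?.get_mut(i))
        {
            v.deprecated = true;
        }
    }
}

fn main() {
    let mut reg = SkillRegistry::default();
    assert_eq!(reg.upload("deploy", "v1: run make release"), 1);
    assert_eq!(reg.upload("deploy", "v2: run make release, then make doc"), 2);
    reg.deprecate("deploy", 1);
    // Old versions remain readable even after deprecation.
    assert!(reg.skills["deploy"][0].deprecated);
    assert_eq!(reg.get("deploy", 1), Some("v1: run make release"));
    println!("history preserved across uploads");
}
```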
Session Resurrection — any agent can call mentisdb_recent_context and immediately know exactly where the project stands, what decisions were made, what traps were already hit, and what comes next — without re-reading code, re-running exploratory searches, or asking the human to re-explain context that was earned through hours of work.
Self-Improving Agent Fleets — agents upload updated skill files after learning something new. A skill checked in at the start of a project is better by the end of it. Combine with Ed25519 signing to create a verifiable, tamper-evident record of which agent authored which version of institutional knowledge.
Multi-Agent Shared Brain — multiple agents, multiple roles, multiple owners can write to the same chain key simultaneously. Every thought carries a stable agent_id. Queries filter by agent identity, thought type, role, tags, concepts, importance, and time windows. The chain represents the full collective intelligence of an entire orchestration system, not just one session.
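The multi-dimensional filtering described above can be sketched as follows. The field names mirror the filters the README lists (agent identity, tags, importance, time window), but the struct layout and query signature are illustrative assumptions, not MentisDB's actual schema.

```rust
// Illustrative query over a shared chain: every filter is optional,
// and each thought carries a stable agent_id.
struct Thought {
    agent_id: String,
    role: String,
    tags: Vec<String>,
    importance: u8,
    timestamp: u64, // e.g. seconds since epoch
    content: String,
}

// Return thoughts matching every supplied filter; None means "don't filter".
fn query<'a>(
    chain: &'a [Thought],
    agent_id: Option<&str>,
    tag: Option<&str>,
    min_importance: u8,
    since: u64,
) -> Vec<&'a Thought> {
    chain
        .iter()
        .filter(|t| agent_id.map_or(true, |a| t.agent_id == a))
        .filter(|t| tag.map_or(true, |g| t.tags.iter().any(|x| x == g)))
        .filter(|t| t.importance >= min_importance && t.timestamp >= since)
        .collect()
}

fn main() {
    let chain = vec![
        Thought {
            agent_id: "pm-1".into(),
            role: "planner".into(),
            tags: vec!["architecture".into()],
            importance: 9,
            timestamp: 100,
            content: "Use a swappable storage adapter layer".into(),
        },
        Thought {
            agent_id: "worker-3".into(),
            role: "specialist".into(),
            tags: vec!["bugfix".into()],
            importance: 4,
            timestamp: 200,
            content: "REST port conflicted with a local dev proxy".into(),
        },
    ];
    let hits = query(&chain, Some("pm-1"), Some("architecture"), 5, 0);
    assert_eq!(hits.len(), 1);
    println!("{} ({})", hits[0].content, hits[0].role);
}
```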
Lessons That Outlive Models — architectural decisions, hard constraints, non-obvious failure modes, and retrospectives written to MentisDB survive chat loss, model upgrades, and team changes. The knowledge compounds instead of evaporating. A new engineer or a new agent boots up, loads the chain, and inherits everything the team learned.
Quick Start
Install the daemon:
cargo install mentisdb
Connect your local AI tools the fast way:
mentisdbd wizard
Or target one integration explicitly:
mentisdbd setup codex
mentisdbd setup all --dry-run
Then start the daemon:
mentisdbd
On an interactive first run with no configured client integrations,
mentisdbd offers to launch the setup wizard immediately after startup so you
do not have to guess the next command.
To keep the daemon running after you close your SSH session:
nohup mentisdbd &
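For long-lived deployments, a process supervisor is more robust than nohup. A minimal systemd user unit might look like the following; the unit name and `ExecStart` path are assumptions (adjust the path to wherever `cargo install` placed the binary):

```ini
# ~/.config/systemd/user/mentisdb.service (hypothetical unit name)
[Unit]
Description=MentisDB daemon

[Service]
ExecStart=%h/.cargo/bin/mentisdbd
Restart=on-failure

[Install]
WantedBy=default.target
```

Then enable it with `systemctl --user enable --now mentisdb`.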
Modern MCP clients bootstrap themselves from the MCP handshake:
- `initialize.instructions` tells the agent to read `mentisdb://skill/core`
- `resources/read(mentisdb://skill/core)` delivers the embedded operating skill
- `GET /mentisdb_skill_md` remains available only as a compatibility fallback
If you need to wire a tool manually, here are the raw MCP commands/configs:
# Claude Code
claude mcp add --transport http mentisdb http://127.0.0.1:9471
# OpenAI Codex
codex mcp add mentisdb --url http://127.0.0.1:9471
# Qwen Code
qwen mcp add --transport http mentisdb http://127.0.0.1:9471
# GitHub Copilot CLI — use /mcp add in interactive mode,
# or write ~/.copilot/mcp-config.json manually (see below)
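The manual Copilot CLI config referenced above is not included in this extract. As a rough sketch only, an HTTP server entry in `~/.copilot/mcp-config.json` generally takes a shape like this; the exact field names may differ between Copilot CLI releases, so verify against your installed version's documentation:

```json
{
  "mcpServers": {
    "mentisdb": {
      "type": "http",
      "url": "http://127.0.0.1:9471"
    }
  }
}
```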
What Is In This Folder
mentisdb/ contains:
- the standalone `mentisdb` library crate
- server support for HTTP MCP and REST, enabled by default
- the `mentisdbd` daemon binary
- dedicated tests under `mentisdb/tests`
Makefile
A Makefile is included at the repository root. All common workflows have a target:
make build # fmt + release build
make build-mentisdbd # build only the daemon binary
make release # fmt, check, clippy, build, test, doc in sequence
make fmt # cargo fmt
make check # cargo check (lib + binary)
make clippy # cargo fmt + clippy --all-targets -D warnings
make test # cargo test
make bench # Criterion benchmarks, output tee'd to /tmp/mentisdb_bench_results.txt
make doc # cargo doc --all-features
make install # cargo install --path . --locked
make publish # cargo publish
make publish-dry-run
make clean
make help # list all targets with descriptions
Build
make build
Or directly with Cargo:
cargo build --release
Build only the library without the default daemon/server stack:
cargo build --no-default-features
Test
make test
Or directly:
cargo test
Run tests for the library-only build:
cargo test --no-default-features
Run rustdoc tests:
cargo test --doc
Benchmarks
MentisDB ships a Criterion benchmark suite and a harness-free HTTP concurrency benchmark:
make bench
Or directly:
cargo bench
Results are also written to /tmp/mentisdb_bench_results.txt so numbers persist across terminal sessions.
Benchmark coverage:
- `benches/thought_chain.rs` — 10 benchmarks: append throughput, query latency, traversal patterns
- `benches/search_baseline.rs` — 4 benchmarks: lexical/filter-first search baseline over content, registry text, indexed+text intersections, and newest-tail limits
- `benches/search_ranked.rs` — 4 benchmarks: additive ranked retrieval over lexical content, filtered ranked queries, and heuristic fallback, plus a baseline append-order comparison
- `benches/skill_registry.rs` — 12 benchmarks: skill upload, search, delta reconstruction, lifecycle
- `benches/http_concurrency.rs` — starts `mentisdbd` in-process on a random port; measures write and read throughput at 100 / 1k / 10k concurrent Tokio tasks with p50/p95/p99 latency reporting
Baseline numbers from the DashMap concurrent chain lookup refactor: 750–930 read req/s at 10k concurrent tasks, compared to a sequential bottleneck on the previous RwLock<HashMap> implementation.
Generate Docs
make doc
Or directly:
cargo doc --no-deps
Generate docs for the library-only build:
cargo doc --no-deps --no-default-features
Run The Daemon
The standalone executable is mentisdbd.
Run it from source:
cargo run --bin mentisdbd
Install it from the crate directory:
make install
# or
cargo install --path . --locked
mentisdbd now owns both daemon startup and local integration setup:
mentisdbd setup codex
mentisdbd setup all --dry-run
mentisdbd wizard
mentisdbd
When it starts, it serves:
- an MCP server
- a REST server
- an HTTPS web dashboard
Before serving traffic, it:
- migrates or reconciles discovered chains to the current schema and default storage adapter
- verifies chain integrity and attempts repair from valid local sources when possible
- migrates the skill registry from V1 to V2 format if needed (idempotent; safe to run repeatedly)
Once startup completes, it prints:
- the active chain directory, default chain key, and bound MCP/REST/dashboard addresses
- a catalog of all exposed HTTP endpoints with one-line descriptions
- a per-chain summary with version, adapter, thought count, and per-agent counts
Daemon Configuration
mentisdbd is configured with environment variables:
- `MENTISDB_DIR`: Directory where MentisDB storage adapters store chain files.
- `MENTISDB_DEFAULT_CHAIN_KEY`: Default `chain_key` used when requests omit one. Default: `borganism-brain`. `MENTISDB_DEFAULT_KEY` is accepted as a deprecated alias.
- `MENTISDB_STORAGE_ADAPTER`: Default storage backend for newly created chains. Supported values: `binary`, `jsonl`. Default: `binary`.
- `MENTISDB_VERBOSE`: When unset, verbose interaction logging defaults to `true`. Supported explicit values: `1`, `0`, `true`, `false`.
- `MENTISDB_LOG_FILE`: Optional path for interaction logs. When set, MentisDB writes interaction logs to that file even if console verbosity is disabled. If `MENTISDB_VERBOSE=true`, the same lines are also mirrored to the console logger.
- `MENTISDB_BIND_HOST`: Bind host for both HTTP servers. Default: `127.0.0.1`.
- `MENTISDB_MCP_PORT`: MCP server port. Default: `9471`.
- `MENTISDB_REST_PORT`: REST server port. Default: `9472`.
- `MENTISDB_DASHBOARD_PORT`: HTTPS dashboard port. Default: `9475`. Set to `0` to disable the web dashboard.
- `MENTISDB_DASHBOARD_PIN`: Optional PIN required to access the dashboard. Leave unset only for trusted localhost use.
- `MENTISDB_AUTO_FLUSH`: Controls per-write durability of the `binary` storage adapter.
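A typical launch with explicit configuration looks like this; the variable names are the documented ones, while the specific values are illustrative:

```shell
# Run with a custom storage directory, JSONL chains, and the
# dashboard disabled (values are examples, not recommendations).
MENTISDB_DIR="$HOME/.mentisdb" \
MENTISDB_DEFAULT_CHAIN_KEY=borganism-brain \
MENTISDB_STORAGE_ADAPTER=jsonl \
MENTISDB_DASHBOARD_PORT=0 \
mentisdbd
```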