Mimir

Mimir - A fully open, customizable memory bank with semantic vector search over locally indexed files (Code Intelligence) and stored memories shared across sessions and chat contexts, allowing worker agents to learn from errors in past runs. Includes drag-and-drop multi-agent orchestration.

Install / Use

/learn @orneryd/Mimir

README

<img width="283" height="380" alt="image" src="https://github.com/user-attachments/assets/f4e3be80-79fe-4e10-b010-9a39b5f70584" />

M.I.M.I.R - Multi-agent Intelligent Memory & Insight Repository

AI-Powered Memory Bank + Task Management Orchestration with Knowledge Graphs

Docker Node.js Neo4j NornicDB MCP License

Official VSCode Extension

Give your AI agents a persistent memory with relationship understanding.

Imagine an AI assistant that remembers every task you've discussed, understands how tasks relate to each other, and recalls relevant context from weeks ago. Mimir makes this possible by combining Neo4j's powerful graph database with AI embeddings and the Model Context Protocol. Your AI doesn't just store isolated facts—it builds a living knowledge graph that grows smarter with every conversation. Perfect for developers managing complex projects where tasks depend on each other, contexts overlap, and you need an AI that truly understands your work.

Mimir is a Model Context Protocol (MCP) server that provides AI assistants (Claude, ChatGPT, etc.) with a persistent graph database to store tasks, context, and relationships. Instead of forgetting everything between conversations, your AI can remember, learn, and build knowledge over time.


🎯 Why Mimir?

Without Mimir:

  • AI forgets context between conversations
  • No persistent task tracking
  • Can't see relationships between tasks
  • Limited to current conversation context

With Mimir:

  • AI remembers all tasks and context
  • Persistent Neo4j graph database
  • Discovers relationships automatically
  • Multi-agent coordination
  • Semantic search with AI embeddings

Perfect for:

  • Long-term software projects
  • Multi-agent AI workflows
  • Complex task orchestration
  • Knowledge graph building

⚡ Quick Start (3 Steps)

💡 New to Mimir? Check out the 5-minute Quick Start Guide for a step-by-step walkthrough.

🔌 Connecting to IDE? See the IDE Integration Guide for VS Code, Cursor, and Windsurf setup!

🎯 VS Code Users? Try the Dev Container setup for instant environment with zero configuration!

1. Prerequisites

2. Install & Start

# Clone the repository
git clone https://github.com/orneryd/Mimir.git
cd Mimir

# Copy environment template
cp env.example .env

# Start all services (automatically detects your platform)
npm run start
# Or manually: docker compose up -d

That's it! Services will start in the background. The startup script automatically detects your platform (macOS ARM64, Linux, Windows) and uses the optimized docker-compose file.

⚠️ IMPORTANT - Configure Workspace Access:

The ONLY required configuration is HOST_WORKSPACE_ROOT in .env:

# Your main source code directory (default: ~/src)
# This gives Mimir access to your code for file indexing
HOST_WORKSPACE_ROOT=~/src  # ✅ Tilde (~) works automatically!

What this does:

  • Mounts your source directory to the container (default: read-write)
  • You manually choose which folders to index via UI or VSCode plugin
  • Don't panic! Indexing is per-folder and requires your explicit action
  • Tilde expansion: ~/src automatically expands to your home directory (e.g., /Users/john/src)
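As a quick sanity check that the mount worked, you can list the workspace from inside the running container. This is a sketch, not a required step; it assumes the compose service is named mimir-server (the name used elsewhere in this README) and the default /workspace target:

```shell
# List the mounted workspace inside the container (default target: /workspace).
# Assumes the compose service is named mimir-server, as used later in this README.
MOUNT_TARGET="${WORKSPACE_ROOT:-/workspace}"
docker compose exec mimir-server ls "$MOUNT_TARGET" 2>/dev/null \
  || echo "container not running (or mount missing)"
```

If the listing shows your source folders, indexing will be able to see them.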

For read-only access, edit docker-compose.yml:

volumes:
  - ${HOST_WORKSPACE_ROOT:-~/src}:${WORKSPACE_ROOT:-/workspace}:ro  # Add :ro flag

3. Verify It's Working

# Check that all services are running
npm run status
# Or manually: docker compose ps

# View logs
npm run logs

# Open Mimir Web UI (includes file indexing, orchestration studio, and portal)
# Visit: http://localhost:9042

# Open Neo4j Browser (default password: "password")
# Visit: http://localhost:7474

# Check MCP server health
curl http://localhost:9042/health
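For scripted setups, the health check above can be wrapped in a small retry loop that waits for the MCP server to come up. A convenience sketch, using the same endpoint as the command above:

```shell
# Poll the health endpoint until the MCP server responds, retrying a few times.
HEALTH_URL="http://localhost:9042/health"
for attempt in 1 2 3; do
  if curl -fsS --max-time 2 "$HEALTH_URL" >/dev/null 2>&1; then
    echo "Mimir is healthy"
    break
  fi
  echo "waiting for Mimir (attempt $attempt/3)..."
  sleep 1
done
```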

Available Commands:

  • npm run start - Start all services
  • npm run stop - Stop all services
  • npm run restart - Restart services
  • npm run logs - View logs
  • npm run status - Check service status
  • npm run rebuild - Full rebuild without cache

See scripts/START_SCRIPT.md for more details.

You're ready! The Mimir Web UI is now available at http://localhost:9042

What you get:

  • 🎯 Portal: Main hub with navigation and file indexing http://localhost:9042/portal
  • 🎨 Orchestration Studio: Visual workflow builder (beta) http://localhost:9042/studio
  • 🔌 MCP API: RESTful API at http://localhost:9042/mcp
  • 💬 Chat API: OpenAI-compatible endpoints at http://localhost:9042/v1/chat/completions and /v1/embeddings
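Since the Chat API is OpenAI-compatible, any OpenAI-style client or a plain curl works against it. A hedged sketch (the model name gpt-4.1 is the default from the configuration section; swap in whatever your provider serves):

```shell
# Hypothetical request to Mimir's OpenAI-compatible Chat API.
# Endpoint from this README; the model name is the documented default.
PAYLOAD='{"model": "gpt-4.1", "messages": [{"role": "user", "content": "Summarize my open tasks"}]}'
curl -fsS http://localhost:9042/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  || echo "request failed - is Mimir running?"
```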

⚙️ Configuration

Environment Variables

Edit the .env file to customize your setup. Most users can use the defaults.

Core Settings (Required)

# Neo4j Database
NEO4J_PASSWORD=password          # Change in production!

# Docker Workspace Mount
HOST_WORKSPACE_ROOT=~/src        # Your main workspace area

LLM Configuration (For Chat API & Orchestration)

# Provider Selection
MIMIR_DEFAULT_PROVIDER=openai                    # Options: openai, copilot, ollama, llama.cpp

# LLM API Configuration  
MIMIR_LLM_API=http://copilot-api:4141           # Base URL (required)
MIMIR_LLM_API_PATH=/v1/chat/completions         # Optional (default: /v1/chat/completions)
MIMIR_LLM_API_MODELS_PATH=/v1/models            # Optional (default: /v1/models)
MIMIR_LLM_API_KEY=dummy-key                     # Optional (use for OpenAI API)

# Model Selection
MIMIR_DEFAULT_MODEL=gpt-4.1                     # Default: gpt-4.1

# Embeddings Configuration
MIMIR_EMBEDDINGS_MODEL=bge-m3        # Default: bge-m3
MIMIR_EMBEDDINGS_API=http://llama-server:8080  # Embeddings endpoint
MIMIR_EMBEDDINGS_API_PATH=/v1/embeddings       # Optional (default: /v1/embeddings)
MIMIR_EMBEDDINGS_DIMENSIONS=1024               # Default: 1024
MIMIR_EMBEDDINGS_CHUNK_SIZE=768                # Default: 768

Provider Options:

  • openai or copilot: OpenAI-compatible endpoints (GitHub Copilot, OpenAI API, or any compatible service)
  • ollama or llama.cpp: Local LLM providers (Ollama or llama.cpp - interchangeable)

Configuration Examples:

Example 1: Copilot API (requires a GitHub Copilot license; recommended for development):

MIMIR_DEFAULT_PROVIDER=openai
MIMIR_LLM_API=http://copilot-api:4141
MIMIR_DEFAULT_MODEL=gpt-4.1
MIMIR_EMBEDDINGS_MODEL=bge-m3
MIMIR_EMBEDDINGS_DIMENSIONS=1024
MIMIR_EMBEDDINGS_CHUNK_SIZE=768

Example 2: Local Ollama (offline, fully local):

MIMIR_DEFAULT_PROVIDER=ollama
MIMIR_LLM_API=http://ollama:11434
MIMIR_DEFAULT_MODEL=qwen2.5-coder
MIMIR_EMBEDDINGS_MODEL=bge-m3

Example 3: OpenAI API (cloud-based, requires API key):

MIMIR_DEFAULT_PROVIDER=openai
MIMIR_LLM_API=https://api.openai.com
MIMIR_LLM_API_PATH=/v1/chat/completions
MIMIR_LLM_API_KEY=sk-...
MIMIR_DEFAULT_MODEL=gpt-4
MIMIR_EMBEDDINGS_MODEL=text-embedding-3-small
MIMIR_EMBEDDINGS_DIMENSIONS=1536

Available Models (Dynamic):

Models are fetched dynamically from your configured LLM provider at runtime. To see available models:

# Query Mimir's models endpoint
curl http://localhost:9042/api/models

# Or query your LLM provider directly
curl $MIMIR_LLM_API/v1/models

All models from the LLM provider's /v1/models endpoint are automatically available - no hardcoded list!

Switching Providers: Change MIMIR_DEFAULT_PROVIDER and MIMIR_LLM_API in .env, then restart:

docker compose restart mimir-server

Existing conversations remain unchanged - the new provider is used for subsequent messages.
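For scripted setups, the switch can be sketched as a small helper that rewrites the two variables in .env before restarting. The sed patterns are an assumption about your .env layout (one KEY=value per line, no surrounding quotes):

```shell
# Rewrite MIMIR_DEFAULT_PROVIDER and MIMIR_LLM_API in an env file.
# Assumes one KEY=value per line; a .bak backup is kept alongside.
switch_provider() {
  env_file="$1"; provider="$2"; api_url="$3"
  sed -i.bak \
    -e "s|^MIMIR_DEFAULT_PROVIDER=.*|MIMIR_DEFAULT_PROVIDER=${provider}|" \
    -e "s|^MIMIR_LLM_API=.*|MIMIR_LLM_API=${api_url}|" \
    "$env_file"
}

# Example: switch to local Ollama, then restart:
#   switch_provider .env ollama http://ollama:11434
#   docker compose restart mimir-server
```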

Embeddings (Optional - for semantic search)

# Enable vector embeddings for AI semantic search
MIMIR_EMBEDDINGS_ENABLED=true
MIMIR_FEATURE_VECTOR_EMBEDDINGS=true

# Embedding provider (uses same endpoints as LLM by default)
MIMIR_EMBEDDINGS_API=http://llama-server:8080
MIMIR_EMBEDDINGS_MODEL=nomic-embed-text
MIMIR_EMBEDDINGS_DIMENSIONS=1024

Embeddings can use the same endpoint as your LLM, or a separate specialized service (like llama.cpp for embeddings only).

Supported Embedding Models:

  • nomic-embed-text (default - lightweight, 768 dims)
  • bge-m3 (higher quality, 1024 dims)
  • text-embedding-3-small (OpenAI, 1536 dims - requires OpenAI LLM provider)

Advanced Settings (Optional)

# Auto-index Mimir documentation on startup (default: true)
# Allows users to immediately query Mimir's docs via semantic search
MIMIR_AUTO_INDEX_DOCS=true
