MemOS
AI memory OS for LLM and Agent systems (Moltbot, Clawdbot, OpenClaw), enabling persistent skill memory for cross-task skill reuse and evolution.
Install / Use
`/learn @MemTensor/MemOS`

README
🦞 Enhanced OpenClaw with MemOS Plugin

🦞 Your lobster now has a working memory system — choose Cloud or Local to get started.
☁️ Cloud Plugin — Hosted Memory Service
- 72% lower token usage — intelligent memory retrieval instead of loading full chat history
- Multi-agent memory sharing — multi-instance agents share memory via the same user_id, with automatic context handoff
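The user_id-based sharing model can be pictured with a toy in-process sketch: two agent instances created with the same user_id read and write one memory pool. The `Agent` class and `SHARED_MEMORY` dict below are illustrative stand-ins for the hosted service, not the plugin's actual API.

```python
# Toy sketch of multi-agent memory sharing keyed by user_id.
# The in-memory dict stands in for the hosted memory service;
# all names here are illustrative assumptions.

SHARED_MEMORY = {}  # user_id -> list of remembered facts


class Agent:
    def __init__(self, name, user_id):
        self.name = name
        self.user_id = user_id

    def remember(self, fact):
        # Both agents append into the pool selected by user_id.
        SHARED_MEMORY.setdefault(self.user_id, []).append(fact)

    def recall(self):
        return list(SHARED_MEMORY.get(self.user_id, []))


planner = Agent("planner", user_id="u-42")
executor = Agent("executor", user_id="u-42")
planner.remember("deploy target is staging")
print(executor.recall())  # → ['deploy target is staging']
```

Because both instances pass the same `user_id`, context written by one is immediately visible to the other — the handoff needs no explicit message passing.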
Get your API key: MemOS Dashboard
Full tutorial → MemOS-Cloud-OpenClaw-Plugin
🧠 Local Plugin — 100% On-Device Memory
- Zero cloud dependency — all data stays on your machine, persistent local SQLite storage
- Hybrid search + task & skill evolution — FTS5 + vector search, auto task summarization, reusable skills that self-upgrade
- Multi-agent collaboration + Memory Viewer — memory isolation, skill sharing, full web dashboard with 7 management pages
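As a rough illustration of the hybrid-search idea above (SQLite FTS5 keyword recall, re-ranked by vector similarity), here is a self-contained sketch. The schema, the toy character-frequency "embedding", and the rank-fusion step are assumptions for demonstration, not the plugin's actual implementation.

```python
import math
import sqlite3

def embed(text):
    # Toy "embedding": normalized bag-of-letters vector,
    # a stand-in for a real embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ord(ch) < 128:
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE mem USING fts5(content)")
for doc in ["remember the deployment checklist",
            "user prefers dark mode",
            "deployment failed on friday"]:
    db.execute("INSERT INTO mem(content) VALUES (?)", (doc,))

def hybrid_search(query, k=2):
    # Stage 1: FTS5 keyword recall; Stage 2: re-rank by vector similarity.
    rows = db.execute(
        "SELECT content FROM mem WHERE mem MATCH ? ORDER BY rank", (query,)
    ).fetchall()
    q = embed(query)
    ranked = sorted(rows, key=lambda r: -cosine(q, embed(r[0])))
    return [r[0] for r in ranked[:k]]

print(hybrid_search("deployment"))
```

The two-stage design is the point: cheap lexical recall narrows the candidate set before the (comparatively expensive) vector scoring runs.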
🌐 Homepage · 📖 Documentation · 📦 NPM
📌 MemOS: Memory Operating System for AI Agents
MemOS is a Memory Operating System for LLMs and AI agents that unifies storing, retrieving, and managing long-term memory, enabling context-aware and personalized interactions, with knowledge-base, multi-modal, and tool memory plus enterprise-grade optimizations built in.
Key Features
- Unified Memory API: A single API to add, retrieve, edit, and delete memory—structured as a graph, inspectable and editable by design, not a black-box embedding store.
- Multi-Modal Memory: Natively supports text, images, tool traces, and personas, retrieved and reasoned together in one memory system.
- Multi-Cube Knowledge Base Management: Manage multiple knowledge bases as composable memory cubes, enabling isolation, controlled sharing, and dynamic composition across users, projects, and agents.
- Asynchronous Ingestion via MemScheduler: Run memory operations asynchronously with millisecond-level latency for production stability under high concurrency.
- Memory Feedback & Correction: Refine memory with natural-language feedback—correcting, supplementing, or replacing existing memories over time.
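The unified add/retrieve/edit/delete surface over a graph of inspectable memories can be sketched minimally as follows. Class and method names here are illustrative assumptions for the sketch, not MemOS's real interface.

```python
from dataclasses import dataclass, field

# Minimal sketch of a graph-structured, inspectable memory store with a
# single add/retrieve/edit/delete surface. Names are hypothetical.

@dataclass
class MemoryNode:
    mem_id: str
    content: str
    links: set = field(default_factory=set)  # edges to related memory IDs


class MemoryStore:
    def __init__(self):
        self._nodes = {}

    def add(self, mem_id, content, links=()):
        self._nodes[mem_id] = MemoryNode(mem_id, content, set(links))

    def retrieve(self, keyword):
        # Inspectable by design: retrieval returns whole nodes, not opaque vectors.
        return [n for n in self._nodes.values() if keyword in n.content]

    def edit(self, mem_id, content):
        self._nodes[mem_id].content = content

    def delete(self, mem_id):
        self._nodes.pop(mem_id)
        for other in self._nodes.values():
            other.links.discard(mem_id)  # keep the graph edges consistent


store = MemoryStore()
store.add("m1", "user likes green tea")
store.add("m2", "user is vegetarian", links=["m1"])
store.edit("m1", "user likes oolong tea")
print([n.content for n in store.retrieve("tea")])  # → ['user likes oolong tea']
```

Note how `delete` also prunes dangling edges — the "editable by design" claim implies memory operations must keep the graph, not just the entries, consistent.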
News
- **2026-03-08 · 🦞 MemOS OpenClaw Plugin — Cloud & Local**
  Official OpenClaw memory plugins launched. Cloud Plugin: hosted memory service with 72% lower token usage and multi-agent memory sharing (MemOS-Cloud-OpenClaw-Plugin). Local Plugin (v1.0.0): 100% on-device memory with persistent SQLite, hybrid search (FTS5 + vector), task summarization & skill evolution, multi-agent collaboration, and a full Memory Viewer dashboard.
- **2025-12-24 · 🎉 MemOS v2.0: Stardust (星尘) Release**
<details> <summary>✨ <b>New Features</b></summary>

Comprehensive KB (doc/URL parsing + cross-project sharing), memory feedback & precise deletion, multi-modal memory (images/charts), tool memory for agent planning, Redis Streams scheduling + DB optimizations, streaming/non-streaming chat, MCP upgrade, and lightweight quick/full deployment.

**Knowledge Base & Memory**
- Added knowledge base support for long-term memory from documents and URLs

**Feedback & Memory Management**
- Added natural language feedback and correction for memories
- Added memory deletion API by memory ID
- Added MCP support for memory deletion and feedback

**Conversation & Retrieval**
- Added chat API with memory-aware retrieval
- Added memory filtering with custom tags (Cloud & Open Source)

**Multimodal & Tool Memory**
- Added tool memory for tool usage history
- Added image memory support for conversations and documents

**Data & Infrastructure**
- Upgraded database for better stability and performance

**Scheduler**
- Rebuilt task scheduler with Redis Streams and queue isolation
- Added task priority, auto-recovery, and quota-based scheduling

**Deployment & Engineering**
- Added lightweight deployment with quick and full modes

**Memory Scheduling & Updates**
- Fixed legacy scheduling API to ensure correct memory isolation
- Fixed memory update logging to show new memories correctly
</details>
- **2025-08-07 · 🎉 MemOS v1.0.0 (MemCube) Release**
  First MemCube release with a word-game demo, LongMemEval evaluation, BochaAISearchRetriever integration, NebulaGraph support, improved search capabilities, and the official Playground launch.
<details> <summary>✨ <b>New Features</b></summary>

**Playground**
- Expanded Playground features and algorithm performance.

**MemCube Construction**
- Added a text game demo based on the MemCube novel.

**Extended Evaluation Set**
- Added LongMemEval evaluation results and scripts.

**Plaintext Memory**
- Integrated internet search with Bocha.
- Added support for Nebula database.
- Added contextual understanding for the tree-structured plaintext memory search interface.

**KV Cache Concatenation**
- Fixed the concat_cache method.

**Plaintext Memory**
- Fixed Nebula search-related issues.
</details>
- **2025-07-07 · 🎉 MemOS v1.0: Stellar (星河) Preview Release**
  A SOTA Memory OS for LLMs is now open-sourced.
- **2025-07-04 · 🎉 MemOS Paper Release**
  MemOS: A Memory OS for AI System is available on arXiv.
- **2024-07-04 · 🎉 Memory3 Model Release at WAIC 2024**
  The Memory3 model, featuring a memory-layered architecture, was unveiled at the 2024 World Artificial Intelligence Conference.
🚀 Quickstart Guide
☁️ 1. Cloud API (Hosted)
Get API Key
- Sign up on the MemOS dashboard
- Go to API Keys and copy your key
Next Steps
- **MemOS Cloud Getting Started**: Connect to MemOS Cloud and enable memory in minutes.
- **MemOS Cloud Platform**: Explore the Cloud dashboard, features, and workflows.
🖥️ 2. Self-Hosted (Local/Private)
- Get the repository:

  ```bash
  git clone https://github.com/MemTensor/MemOS.git
  cd MemOS
  pip install -r ./docker/requirements.txt
  ```

- Configure `docker/.env.example` and copy it to `MemOS/.env`.
- The `OPENAI_API_KEY`, `MOS_EMBEDDER_API_KEY`, `MEMRADER_API_KEY`, and other keys can be applied for through [BaiLian](https://bailian.console.aliyun.com/?spm=a2c4g.11186623.0).
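As a small sanity check for this step, one might verify the required keys are present before starting the service. This assumes the `.env` values have been exported into the process environment (e.g. by Docker Compose); `missing_keys` is a hypothetical helper, not part of MemOS.

```python
import os

# Hypothetical startup check: report which required keys from MemOS/.env
# are absent from the environment. Key names come from the step above.

REQUIRED = ["OPENAI_API_KEY", "MOS_EMBEDDER_API_KEY", "MEMRADER_API_KEY"]

def missing_keys(env=os.environ):
    return [k for k in REQUIRED if not env.get(k)]

# Example with only one key set:
print(missing_keys({"OPENAI_API_KEY": "sk-test"}))
# → ['MOS_EMBEDDER_API_KEY', 'MEMRADER_API_KEY']
```

Failing fast on missing keys gives a clearer error than letting the embedder or reader back end fail mid-request.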
