
ReMe

ReMe: Memory Management Kit for Agents - Remember Me, Refine Me.


<p align="center"> <img src="docs/_static/figure/reme_logo.png" alt="ReMe Logo" width="50%"> </p> <p align="center"> <a href="https://pypi.org/project/reme-ai/"><img src="https://img.shields.io/badge/python-3.10+-blue" alt="Python Version"></a> <a href="https://pypi.org/project/reme-ai/"><img src="https://img.shields.io/pypi/v/reme-ai.svg?logo=pypi" alt="PyPI Version"></a> <a href="https://pepy.tech/project/reme-ai/"><img src="https://img.shields.io/pypi/dm/reme-ai" alt="PyPI Downloads"></a> <a href="https://github.com/agentscope-ai/ReMe"><img src="https://img.shields.io/github/commit-activity/m/agentscope-ai/ReMe?style=flat-square" alt="GitHub commit activity"></a> </p> <p align="center"> <a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache--2.0-black" alt="License"></a> <a href="./README.md"><img src="https://img.shields.io/badge/English-Click-yellow" alt="English"></a> <a href="./README_ZH.md"><img src="https://img.shields.io/badge/简体中文-点击查看-orange" alt="简体中文"></a> <a href="https://github.com/agentscope-ai/ReMe"><img src="https://img.shields.io/github/stars/agentscope-ai/ReMe?style=social" alt="GitHub Stars"></a> <a href="https://deepwiki.com/agentscope-ai/ReMe"><img src="https://img.shields.io/badge/DeepWiki-Ask_Devin-navy.svg" alt="DeepWiki"></a> </p> <p align="center"> <a href="https://trendshift.io/repositories/20528" target="_blank"><img src="https://trendshift.io/api/badge/repositories/20528" alt="agentscope-ai%2FReMe | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> </p> <p align="center"> <strong>A memory management toolkit for AI agents — Remember Me, Refine Me.</strong><br> </p>

For the older version, please refer to the 0.2.x documentation.


🧠 ReMe is a memory management framework designed for AI agents, providing both file-based and vector-based memory systems.

It tackles two core problems of agent memory: limited context window (early information is truncated or lost in long conversations) and stateless sessions (new sessions cannot inherit history and always start from scratch).

ReMe gives agents real memory — old conversations are automatically compacted, important information is persistently stored, and relevant context is automatically recalled in future interactions.

ReMe achieves state-of-the-art results on the LoCoMo and HaluMem benchmarks; see the Experimental results.

<details> <summary><b>What you can do with ReMe</b></summary> <br>
  • Personal assistant: Provide long-term memory for agents like CoPaw, remembering user preferences and conversation history.
  • Coding assistant: Record code style preferences and project context, maintaining a consistent development experience across sessions.
  • Customer service bot: Track user issue history and preference settings for personalized service.
  • Task automation: Learn success/failure patterns from historical tasks to continuously optimize execution strategies.
  • Knowledge Q&A: Build a searchable knowledge base with semantic search and exact matching support.
  • Multi-turn dialogue: Automatically compress long conversations while retaining key information within limited context windows.
</details>

📁 File-based memory system (ReMeLight)

Memory as files, files as memory.

Treat memory as files — readable, editable, and copyable. CoPaw integrates long-term memory and context management by inheriting from ReMeLight.

| Traditional memory system | File-based ReMe |
|---------------------------|----------------------|
| 🗄️ Database storage | 📝 Markdown files |
| 🔒 Opaque | 👀 Always readable |
| ❌ Hard to modify | ✏️ Directly editable |
| 🚫 Hard to migrate | 📦 Copy to migrate |

working_dir/
├── MEMORY.md              # Long-term memory: persistent info such as user preferences
├── memory/
│   └── YYYY-MM-DD.md      # Daily journal: automatically written after each conversation
├── dialog/                # Raw conversation records: full dialog before compression
│   └── YYYY-MM-DD.jsonl   # Daily conversation messages in JSONL format
└── tool_result/           # Cache for long tool outputs (auto-managed, expired entries auto-cleaned)
    └── <uuid>.txt
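Because memory is stored as plain files, you can inspect or edit it with ordinary file I/O — no database client needed. A minimal sketch (the paths follow the layout above; the contents are made up for illustration):

```python
from datetime import date
from pathlib import Path

working_dir = Path("working_dir")

# Long-term memory is an ordinary Markdown file: read it, grep it, edit it.
memory_file = working_dir / "MEMORY.md"
memory_file.parent.mkdir(parents=True, exist_ok=True)
memory_file.write_text("# Long-term memory\n\n- User prefers concise answers\n")

# Daily journals live under memory/YYYY-MM-DD.md.
journal = working_dir / "memory" / f"{date.today():%Y-%m-%d}.md"
journal.parent.mkdir(parents=True, exist_ok=True)
journal.write_text("## Today\n\n- Discussed project setup\n")

print(memory_file.read_text())
```

Migrating memory to another machine is then just copying `working_dir/`.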

Core capabilities

ReMeLight is the core class of the file-based memory system. It provides full memory management capabilities for AI agents:

<table>
<tr><th>Category</th><th>Method</th><th>Function</th><th>Key components</th></tr>
<tr><td rowspan="4">Context Management</td><td><code>check_context</code></td><td>📊 Check context size</td><td><a href="reme/memory/file_based/components/context_checker.py">ContextChecker</a> — checks whether context exceeds thresholds and splits messages</td></tr>
<tr><td><code>compact_memory</code></td><td>📦 Compact history into summary</td><td><a href="reme/memory/file_based/components/compactor.py">Compactor</a> — ReActAgent that generates structured context summaries</td></tr>
<tr><td><code>compact_tool_result</code></td><td>✂️ Compact long tool outputs</td><td><a href="reme/memory/file_based/components/tool_result_compactor.py">ToolResultCompactor</a> — truncates long tool outputs and stores them in <code>tool_result/</code> while keeping file references in messages</td></tr>
<tr><td><code>pre_reasoning_hook</code></td><td>🔄 Pre-reasoning hook</td><td><code>compact_tool_result</code> + <code>check_context</code> + <code>compact_memory</code> + <code>summary_memory</code> (async)</td></tr>
<tr><td rowspan="2">Long-term Memory</td><td><code>summary_memory</code></td><td>📝 Persist important memory to files</td><td><a href="reme/memory/file_based/components/summarizer.py">Summarizer</a> — ReActAgent + file tools (<code>read</code> / <code>write</code> / <code>edit</code>)</td></tr>
<tr><td><code>memory_search</code></td><td>🔍 Semantic memory search</td><td><a href="reme/memory/file_based/tools/memory_search.py">MemorySearch</a> — hybrid retrieval with vectors + BM25</td></tr>
<tr><td rowspan="2">Session Memory</td><td><code>get_in_memory_memory</code></td><td>💾 Create in-session memory instance</td><td>Returns ReMeInMemoryMemory with dialog_path configured for persistence</td></tr>
<tr><td><code>await_summary_tasks</code></td><td>⏳ Wait for async summary tasks</td><td>Block until all background summary tasks complete</td></tr>
<tr><td rowspan="2">Lifecycle</td><td><code>start</code></td><td>🚀 Start memory system</td><td>Initialize file storage, file watcher, and embedding cache; clean up expired tool result files</td></tr>
<tr><td><code>close</code></td><td>📕 Shutdown and cleanup</td><td>Clean up tool result files, stop file watcher, and persist embedding cache</td></tr>
</table>
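The split that ContextChecker performs can be pictured roughly as follows. This is an illustrative sketch only — the function name, token counting, and exact threshold handling are assumptions, not ReMe's actual implementation:

```python
def split_for_compaction(token_counts, threshold=90000, reserve=10000):
    """Illustrative sketch: decide which messages (oldest first) to compact.

    token_counts: per-message token counts, oldest first.
    Returns (indices_to_compact, indices_to_keep, within_threshold).
    """
    if sum(token_counts) <= threshold:
        # Context still fits: keep everything, compact nothing.
        return [], list(range(len(token_counts))), True

    # Keep the most recent messages up to `reserve` tokens; compact the rest.
    kept, budget = [], reserve
    for i in range(len(token_counts) - 1, -1, -1):
        if token_counts[i] > budget:
            break
        budget -= token_counts[i]
        kept.append(i)
    kept.reverse()
    to_compact = [i for i in range(len(token_counts)) if i not in kept]
    return to_compact, kept, False
```

The older messages selected this way are what Compactor then folds into a structured summary.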

🚀 Quick start

Installation

Install from source:

git clone https://github.com/agentscope-ai/ReMe.git
cd ReMe
pip install -e ".[light]"

Update to the latest version:

git pull
pip install -e ".[light]"

Environment variables

ReMeLight reads environment variables to configure the LLM and embedding endpoints:

| Variable | Description | Example |
|----------------------|-------------------------------|-----------------------------------------------------|
| LLM_API_KEY | LLM API key | sk-xxx |
| LLM_BASE_URL | LLM base URL | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| EMBEDDING_API_KEY | Embedding API key (optional) | sk-xxx |
| EMBEDDING_BASE_URL | Embedding base URL (optional) | https://dashscope.aliyuncs.com/compatible-mode/v1 |
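For example, a shell setup might look like this (the key values are placeholders):

```shell
# LLM endpoint (required)
export LLM_API_KEY="sk-xxx"
export LLM_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"

# Embedding endpoint (optional; used when vector search is enabled)
export EMBEDDING_API_KEY="sk-xxx"
export EMBEDDING_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
```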

Python usage

import asyncio

from reme.reme_light import ReMeLight


async def main():
    # Initialize ReMeLight
    reme = ReMeLight(
        default_as_llm_config={"model_name": "qwen3.5-35b-a3b"},
        # default_embedding_model_config={"model_name": "text-embedding-v4"},
        default_file_store_config={"fts_enabled": True, "vector_enabled": False},
        enable_load_env=True,
    )
    await reme.start()

    messages = [...]  # List of conversation messages

    # 1. Check context size (token counting, determine if compaction is needed)
    messages_to_compact, messages_to_keep, is_valid = await reme.check_context(
        messages=messages,
        memory_compact_threshold=90000,  # Threshold to trigger compaction (tokens)
        memory_compact_reserve=10000,  # Token count to reserve for recent messages
    )

    # 2. Compact conversation history into a structured summary
    summary = await reme.compact_memory(
        messages=messages,
        previous_summary="",
        max_input_length=128000,  # Model context window (tokens)
        compact_ratio=0.7,  # Trigger compaction when exceeding max_input_length * 0.7
        language="zh",  # Summary language (e.g., "zh" / "en")
    )

    # 3. Compact long tool outputs (prevent tool results from blowing up context)
    messages = await reme.compact_tool_result(messages)

    # 4. Pre-reasoning hook (auto compact tool results + check context + generate summaries)
    processed_messages, compressed_summary = await reme.pre_reasoning_hook(
        messages=messages,
        system_prompt="You are a helpful AI assistant.",
        compressed_summary="",
        max_input_length=128000,
        compact_ratio=0.7,
        memory_compact_reserve=10000,
        enable_tool_result_compact=True,
        tool_result_compact_keep_n=3,
    )

    # 5. Persist important memory to files (writes to memory/YYYY-MM-DD.md)
    summary_result = await reme.summary_memory(messages=messages)  # argument names illustrative; check the API reference

    # Wait for background summary tasks, then shut down cleanly
    await reme.await_summary_tasks()
    await reme.close()


asyncio.run(main())
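The idea behind `compact_tool_result` can be sketched independently of ReMe. The helper below is hypothetical — its name, parameters, and cache format are assumptions — but it mirrors the documented behavior of spilling long tool output to `tool_result/` and leaving a file reference in the message:

```python
import uuid
from pathlib import Path


def spill_long_tool_output(text: str, max_chars: int = 2000,
                           cache_dir: str = "tool_result") -> str:
    """Hypothetical helper: keep short outputs inline, spill long ones to disk.

    Long outputs are truncated in the message and written in full to a
    <uuid>.txt file, mirroring ReMe's tool_result/ cache layout.
    """
    if len(text) <= max_chars:
        return text
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    ref = cache / f"{uuid.uuid4()}.txt"
    ref.write_text(text)
    return text[:max_chars] + f"\n[truncated; full output saved to {ref}]"
```

This keeps a 50 KB tool response from consuming the context window while the agent can still re-read the full output from the cache file when needed.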
