Acontext
Agent Skills as a Memory Layer
What is Acontext?
Acontext is an open-source skill memory layer for AI agents. It automatically captures learnings from agent runs and stores them as agent skill files — files you can read, edit, and share across agents, LLMs, and frameworks.
If you want the agent you build to learn from its mistakes and reuse what worked — without opaque memory polluting your context — give Acontext a try.
Skill is All You Need
Agent memory is getting increasingly complicated🤢 — hard to understand, hard to debug, and hard for users to inspect or correct. Acontext takes a different approach: if agent skills can represent every piece of knowledge an agent needs as simple files, then so can its memory.
- Acontext builds memory in the agent skills format, so everyone can see and understand what the memory actually contains.
- Skill is Memory, Memory is Skill. Whether a skill is one you downloaded from Clawhub or one you created yourself, Acontext can follow it and evolve it over time.
The Philosophy of Acontext
- Plain files, any framework — Skill memories are Markdown files. Use them with LangGraph, Claude, AI SDK, or anything that reads files. No embeddings, no API lock-in: version them with Git, search them with grep, and mount them into a sandbox.
- You design the structure — Attach more skills to define the schema, naming, and file layout of the memory: for example, one file per contact or one per project, specified by uploading a working context skill.
- Progressive disclosure, not search — The agent can use get_skill and get_skill_file to fetch what it needs. Retrieval happens through tool use and reasoning, not semantic top-k.
- Download as ZIP, reuse anywhere — Export skill files as a ZIP. Run them locally, in another agent, or with another LLM. No vendor lock-in; no re-embedding or migration step.
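The plain-file, progressive-disclosure idea can be sketched with nothing but the filesystem. A minimal sketch, assuming skills live as Markdown files in per-skill directories; these local list_skills/get_skill/get_skill_file functions are hypothetical stand-ins for the SDK's tools, not the real implementation:

```python
from pathlib import Path
import tempfile

# Hypothetical local stand-ins for the skill content tools: the agent
# first lists skill names, then fetches only the files it decides it needs.

def list_skills(root: Path) -> list[str]:
    """Return the names of all skill directories (each holds a SKILL.md)."""
    return sorted(p.name for p in root.iterdir() if (p / "SKILL.md").exists())

def get_skill(root: Path, name: str) -> str:
    """Return the top-level SKILL.md, the entry point for a skill."""
    return (root / name / "SKILL.md").read_text()

def get_skill_file(root: Path, name: str, rel: str) -> str:
    """Fetch one supporting file, only when the agent asks for it."""
    return (root / name / rel).read_text()

# Demo: write a tiny skill to disk and read it back progressively.
root = Path(tempfile.mkdtemp())
skill = root / "git-workflow"
skill.mkdir()
(skill / "SKILL.md").write_text("# git-workflow\nSee rebase.md for details.")
(skill / "rebase.md").write_text("Prefer rebase over merge for feature branches.")

print(list_skills(root))                     # first pass: names only
print(get_skill(root, "git-workflow"))       # second pass: entry point
print(get_skill_file(root, "git-workflow", "rebase.md"))  # drill-down
```

Because everything is a file, the same layout works for Git, grep, or mounting into a sandbox.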
How It Works
Store — how skills get memorized
```mermaid
flowchart LR
    A[Session messages] --> C[Task complete/failed]
    C --> D[Distillation]
    D --> E[Skill Agent]
    E --> F[Update Skills]
```
- Session messages — Conversation (and optionally tool calls, artifacts) is the raw input. Tasks are extracted from the message stream automatically (or inferred from explicit outcome reporting).
- Task complete or failed — When a task is marked done or failed (e.g. by agent report or automatic detection), that outcome is the trigger for learning.
- Distillation — An LLM pass infers from the conversation and execution trace what worked, what failed, and user preferences.
- Skill Agent — Decides where to store the learning (an existing skill or a new one) and writes it according to your SKILL.md schema.
- Update Skills — Skills are updated in place. You define the structure in SKILL.md; the system does extraction, routing, and writing.
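The distill-and-route step above can be sketched in miniature. This is a toy, in which keyword matching stands in for the LLM-based Skill Agent; the fallback skill name misc-notes is invented for illustration:

```python
# Toy distill-and-route sketch. The real Skill Agent uses an LLM to decide
# where a lesson belongs; this hypothetical stand-in routes by keyword,
# mirroring the "existing skill or new skill" decision.

def route_lesson(lesson: str, skills: dict[str, list[str]]) -> str:
    """Append the lesson to a matching skill, or start a new one."""
    for name in skills:
        # "git-workflow" matches lessons that mention "git workflow"
        if name.replace("-", " ") in lesson.lower():
            skills[name].append(lesson)
            return name
    fallback = "misc-notes"  # hypothetical name for a freshly created skill
    skills.setdefault(fallback, []).append(lesson)
    return fallback

skills = {"git-workflow": ["Prefer rebase over merge."]}
print(route_lesson("during the git workflow, rebase before merging", skills))
print(route_lesson("the user prefers dark mode", skills))
print(sorted(skills))
```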
Recall — how the agent uses skills on the next run
```mermaid
flowchart LR
    E[Any Agent] --> F[list_skills/get_skill]
    F --> G[Appear in context]
```
Give your agent the Skill Content Tools (get_skill, get_skill_file). The agent decides what it needs, calls the tools, and gets the skill content. No embedding search — progressive disclosure, with the agent in the loop.
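If your framework speaks OpenAI-style function calling, the two tools might be declared and dispatched like this. The parameter names and the in-memory store are assumptions for illustration, not the SDK's actual schema:

```python
# Hypothetical OpenAI-style tool declarations for the skill content tools.
# Parameter names here are assumed, not taken from the Acontext SDK.
SKILL_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_skill",
            "description": "Fetch a skill's top-level SKILL.md by name.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_skill_file",
            "description": "Fetch one supporting file inside a skill.",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "path": {"type": "string"},
                },
                "required": ["name", "path"],
            },
        },
    },
]

def dispatch(tool_name: str, args: dict, store: dict) -> str:
    """Route a model's tool call to an in-memory skill store (demo only)."""
    if tool_name == "get_skill":
        return store[args["name"]]["SKILL.md"]
    if tool_name == "get_skill_file":
        return store[args["name"]][args["path"]]
    raise ValueError(f"unknown tool: {tool_name}")

store = {"git-workflow": {"SKILL.md": "# git-workflow", "rebase.md": "Rebase first."}}
print(dispatch("get_skill", {"name": "git-workflow"}, store))
```

The model sees only the tool schemas; the dispatcher feeds back exactly the content it asked for, which is the "agent in the loop" retrieval described above.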
🪜 Use It to Improve your Agent
Claude Code:
Read https://acontext.io/SKILL.md and follow the instructions to install and configure Acontext for Claude Code
OpenClaw:
Read https://acontext.io/SKILL.md and follow the instructions to install and configure Acontext for OpenClaw
🚀 Step-by-step Quickstart
Connect to Acontext
- Go to Acontext.io and claim your free credits.
- Go through the one-click onboarding to get your API key (it starts with sk-ac).
We provide an acontext-cli to help you run a quick proof of concept. Install it first in your terminal:

```bash
curl -fsSL https://install.acontext.io | sh
```
You need Docker installed and an OpenAI API key to start an Acontext backend on your machine:

```bash
mkdir acontext_server && cd acontext_server
acontext server up
```
Make sure your LLM can call tools. By default, Acontext uses gpt-4.1.
acontext server up creates (or reuses) .env and config.yaml for Acontext, and creates a db folder to persist data.
Once it's done, you can access the following endpoints:
- Acontext API Base URL: http://localhost:8029/api/v1
- Acontext Dashboard: http://localhost:3000/
Install SDKs
We maintain Python and TypeScript SDKs. The snippets below use Python; click the doc link to see the TypeScript SDK quickstart.
```bash
pip install acontext
```
Initialize Client
```python
import os

from acontext import AcontextClient

# For cloud:
client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)

# For self-hosted:
client = AcontextClient(
    base_url="http://localhost:8029/api/v1",
    api_key="sk-ac-your-root-api-bearer-token",
)
```
Skill Memory in Action
Create a learning space, attach a session, and let the agent learn — skills are written as Markdown files automatically.
```python
from acontext import AcontextClient

client = AcontextClient(api_key="sk-ac-...")

# Create a learning space and attach a session
space = client.learning_spaces.create()
session = client.sessions.create()
client.learning_spaces.learn(space.id, session_id=session.id)

# Run your agent, store messages — when tasks complete, learning runs automatically
client.sessions.store_message(session.id, blob={"role": "user", "content": "My name is Gus"})
client.sessions.store_message(session.id, blob={"role": "assistant", "content": "Hi Gus! How can I help you today?"})
# ... agent runs ...

# List learned skills (Markdown files)
client.learning_spaces.wait_for_learning(space.id, session_id=session.id)
skills = client.learning_spaces.list_skills(space.id)

# Download all skill files to a local directory
for skill in skills:
    client.skills.download(skill_id=skill.id, path=f"./skills/{skill.name}")
```
wait_for_learning is a blocking helper for demo purposes. In production, task extraction and learning run in the background automatically; your agent never waits.
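Because downloaded skills are plain Markdown, reusing them in another agent is just file I/O. A sketch, assuming skills were downloaded into a per-skill directory layout as in the snippet above (the demo substitutes a throwaway temp directory, and the helper itself is hypothetical):

```python
from pathlib import Path
import tempfile

def skills_to_prompt(root: Path) -> str:
    """Concatenate downloaded SKILL.md files into a system-prompt section.

    Hypothetical helper: works with any framework that accepts a plain
    string prompt, since the skills are just Markdown on disk.
    """
    parts = []
    for skill_md in sorted(root.glob("*/SKILL.md")):
        parts.append(f"## Skill: {skill_md.parent.name}\n{skill_md.read_text()}")
    return "\n\n".join(parts)

# Demo with a throwaway directory standing in for ./skills
root = Path(tempfile.mkdtemp())
(root / "git-workflow").mkdir()
(root / "git-workflow" / "SKILL.md").write_text("Prefer rebase over merge.")
print(skills_to_prompt(root))
```

The same files could instead be mounted into a sandbox or committed to a repo; nothing about them is tied to one runtime.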
More Features
- Context Engineering — Compress context with summaries and edit strategies
- Disk — Virtual, persistent filesystem for agents
- Sandbox — Isolated code execution with bash, Python, and mountable skills
- Agent Tools — Disk tools, sandbox tools, and skill tools for LLM function calling
🧐 Use Acontext to Build Agents
Download end-to-end scripts with acontext:
Python
```bash
acontext create my-proj --template-path "python/openai-basic"
```
More examples in Python:

- python/openai-agent-basic: OpenAI Agent SDK template
- python/openai-agent-artifacts: agent can edit and download artifacts
- python/claude-agent-sdk: Claude Agent SDK with ClaudeAgentStorage
- python/agno-basic: Agno framework template
- python/smolagents-basic: smolagents (Hugging Face) template
- python/interactive-agent-skill: interactive sandbox with mountable agent skills
Typescript
```bash
acontext create my-proj --template-path "typescript/openai-basic"
```
More examples in TypeScript:

- typescript/vercel-ai-basic: agent in @vercel/ai-sdk
- typescript/claude-agent-sdk: Claude Agent SDK with ClaudeAgentStorage
- typescript/interactive-agent-skill: interactive sandbox with mountable agent skills
> [!NOTE]
> Check our example repo for more templates: Acontext-Examples.