Archipelago
<a href="https://arxiv.org/abs/2601.14242"><img src="https://img.shields.io/badge/📝-Paper-b31b1b"></a> <a href="https://huggingface.co/datasets/mercor/apex-agents"><img src="https://img.shields.io/badge/🤗-Data-yellow"></a> <a href="http://mercor.com/blog/introducing-apex-agents"><img src="https://img.shields.io/badge/📰-Blog-0ea5e9"></a> <a href="mailto:apex@mercor.com"><img src="https://img.shields.io/badge/✉️-Contact-green"></a>
Archipelago is a system for running and evaluating AI agents against MCP applications. It consists of three main components:
- Environment: Headless environment that exposes an MCP gateway
- Agents: Extensible agent runner with a registry of configurable agent implementations
- Grading: Grades agent performance by comparing before/after snapshots (formerly "Verifier")
All components run in Docker containers.
The environment is meant to be run independently as a sandbox, and then an LLM agent connects to the exposed MCP server. The agents runner spawns and manages environment sandboxes automatically.
Quick Start: Run Your First Task
Estimated time: 30-60 minutes for first run
This quick start walks you through running a single task end-to-end using the provided example.
Prerequisites
- Docker Desktop
- Python 3.13
- UV
- LLM API key (Anthropic, OpenAI, or Gemini)
1. Set Up Environment Variables
```shell
cd archipelago

# Copy example env files
cp environment/.env.example environment/.env
cp agents/.env.example agents/.env
cp grading/.env.example grading/.env

# Edit agents/.env and grading/.env with your LLM API key (at least one required):
# ANTHROPIC_API_KEY=sk-ant-...
# or OPENAI_API_KEY=sk-...
# or GOOGLE_API_KEY=...

# The environment/.env can be left as-is for local development
```
2. Run an Example
We provide two examples:
Option A: HuggingFace Benchmark Task (Recommended)
Run tasks from the mercor/apex-agents benchmark dataset of 480 professional-services tasks.

```shell
cd examples/hugging_face_task
./run.sh
```
See examples/hugging_face_task/README.md for details.
Option B: Simple Task
A minimal example with a pre-defined task (find a gorilla image in a filesystem).
```shell
cd examples/simple_task
./run.sh
```
See examples/simple_task/README.md for a detailed step-by-step walkthrough.
Both scripts will:
- Start the environment container
- Populate the environment with the world snapshot
- Configure MCP servers
- Run the agent
- Save the final snapshot
- Run grading and display results
3. Check Results
```shell
# View grading results
cat ./grades.json | jq '.scoring_results.final_score'

# View agent trajectory
cat ./trajectory.json | jq '.status'
```
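If you prefer Python over jq, small helpers can pull the same fields. Only the two paths queried above (`scoring_results.final_score` and `status`) are taken from this README; the rest of each file's schema is not assumed.

```python
def final_score(grades: dict) -> float:
    """Mirror of `jq '.scoring_results.final_score'` on grades.json."""
    return grades["scoring_results"]["final_score"]

def run_status(trajectory: dict) -> str:
    """Mirror of `jq '.status'` on trajectory.json."""
    return trajectory["status"]
```

Load the files with `json.load` and pass the resulting dicts to these helpers.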
Components
Environment
The Environment is a headless gateway designed to run in a Docker container. It serves as a management layer for LLM agents, providing MCP server orchestration, data population from S3, and state snapshotting.
Features
- MCP Gateway: Hot-swappable gateway that routes requests to configured MCP servers. Supports dynamic reconfiguration of tools and resources.
- Data Management:
  - Population: Download data from S3-compatible storage into local subsystems (`/filesystem`, `/.apps_data`).
  - Snapshots: Create `tar.gz` archives of the environment state and stream them back to the client or upload directly to S3.
- Docker-First: Designed to run as a containerized service with health checks and lifecycle management.
API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| /health | GET | Health check - returns 200 OK if running |
| /docs | GET | FastAPI generated API documentation |
| /apps | POST | Hot-swap MCP gateway configuration |
| /mcp/ | - | MCP server endpoint (after configuration) |
| /data/populate | POST | Download data from S3 into subsystems |
| /data/snapshot | POST | Stream a tar.gz snapshot of environment state |
| /data/snapshot/s3 | POST | Upload snapshot to S3, returns pre-signed URL |
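As a sketch, the table above can be wrapped in a thin client. Only the endpoint paths and the default local port 8080 (used elsewhere in this README) come from the source; the class, method names, and payload shapes are illustrative.

```python
import requests

class EnvClient:
    """Minimal, illustrative wrapper around the environment's HTTP API."""

    def __init__(self, base_url: str = "http://localhost:8080"):
        self.base_url = base_url.rstrip("/")

    def url(self, path: str) -> str:
        return f"{self.base_url}{path}"

    def healthy(self) -> bool:
        # GET /health returns 200 OK when the environment is running
        return requests.get(self.url("/health")).status_code == 200

    def configure_apps(self, config: dict) -> requests.Response:
        # POST /apps hot-swaps the MCP gateway configuration
        return requests.post(self.url("/apps"), json=config)

    def populate(self, payload: dict) -> requests.Response:
        # POST /data/populate downloads data from S3 into subsystems
        return requests.post(self.url("/data/populate"), json=payload)
```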
Configuration
The environment is configured via environment variables:
| Variable | Description | Default |
|----------|-------------|---------|
| S3_SNAPSHOTS_BUCKET | S3 bucket for storing snapshots | snapshots |
| S3_SNAPSHOTS_PREFIX | Prefix for snapshot objects in S3 | "" |
| S3_DEFAULT_REGION | AWS region for S3 operations | us-west-2 |
| S3_ACCESS_KEY_ID | AWS access key ID | None |
| S3_SECRET_ACCESS_KEY | AWS secret access key | None |
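For example, to point snapshots at your own bucket, set the variables from the table in `environment/.env`. All values below are placeholders; substitute your own.

```shell
# environment/.env — placeholder values
S3_SNAPSHOTS_BUCKET=my-snapshots-bucket
S3_SNAPSHOTS_PREFIX=dev/
S3_DEFAULT_REGION=us-west-2
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=...
```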
Example: Configuring MCP Servers
```python
import requests

config = {
    "mcpServers": {
        "filesystem_server": {
            "transport": "stdio",
            "command": "python",
            "args": ["main.py"],
            "cwd": "./mcp_servers/filesystem_server"  # Must be a valid path in the container
        }
    }
}

requests.post("http://localhost:8080/apps", json=config)
```
After configuration, `http://localhost:8080/mcp/` exposes an MCP server that agents can connect to.
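MCP clients speak JSON-RPC 2.0, so the first message a client sends to the `/mcp/` endpoint is an `initialize` request. The payload builder below is a sketch of that handshake message; the protocol version string is a placeholder, and real clients should use an MCP SDK rather than hand-rolling requests.

```python
import itertools

# Monotonically increasing JSON-RPC request ids
_ids = itertools.count(1)

def initialize_request(client_name: str, protocol_version: str = "2025-03-26") -> dict:
    """Build an MCP `initialize` JSON-RPC request (sketch; version string is a placeholder)."""
    return {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "initialize",
        "params": {
            "protocolVersion": protocol_version,
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": "0.1"},
        },
    }
```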
For more details, see the Environment README.
Agents
The Agents component provides an extensible framework for running AI agents against environment sandboxes. It uses a registry-based architecture that allows multiple agent implementations with configurable parameters.
Features
- Agent Registry: Pluggable agent implementations (e.g., `react_toolbelt_agent`) that can be extended with custom agents
- Configurable Parameters: Each agent type defines its own configuration schema (max steps, timeouts, system prompts, etc.)
- Environment Integration: Spawns and manages environment sandboxes, handling data population, MCP configuration, and snapshotting
- Observability: Built-in logging to multiple backends (Datadog, PostgreSQL, Redis, file)
Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                         Agents Runner                           │
├─────────────────────────────────────────────────────────────────┤
│  runner/                                                        │
│  ├── main.py              Main orchestrator                     │
│  ├── models.py            Data models                           │
│  ├── agents/                                                    │
│  │   ├── models.py        AgentIds, AgentDefn, AgentRunInput    │
│  │   ├── registry.py      AGENT_REGISTRY mapping                │
│  │   └── react_toolbelt_agent/   Default agent implementation   │
│  └── utils/               Settings, logging, redis              │
└─────────────────────────────────────────────────────────────────┘
            │
            │ HTTP API (spawned sandbox)
            ▼
┌─────────────────────────────────────────────────────────────────┐
│                    Environment (Sandbox)                        │
│  POST /data/populate · POST /apps · /mcp/ · POST /snapshot      │
└─────────────────────────────────────────────────────────────────┘
```
Agent Registry
Agents are registered in runner/agents/registry.py. Each agent definition includes:
- `agent_id`: Unique identifier (e.g., `react_toolbelt_agent`)
- `agent_impl`: The async function that runs the agent
- `agent_config_fields`: Schema for configurable parameters
Example: Loop Agent Configuration
```python
AgentDefn(
    agent_id=AgentIds.LOOP_AGENT,
    agent_impl=loop_agent_run,
    agent_config_fields=[
        TaskFieldSchema(field_id="max_steps", field_type=TaskFieldType.NUMBER, default_value=100),
        TaskFieldSchema(field_id="timeout", field_type=TaskFieldType.NUMBER, default_value=10800),
        TaskFieldSchema(field_id="tool_call_timeout", field_type=TaskFieldType.NUMBER, default_value=60),
        TaskFieldSchema(field_id="system_prompt", field_type=TaskFieldType.TEXTAREA, required=False),
    ],
)
```
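To add your own agent, the registry pattern suggests defining an async run function and registering an `AgentDefn` under a new id. The types below are simplified stand-ins for the real models in `runner/agents/models.py`, not the actual definitions; only the names `AgentDefn`, `AGENT_REGISTRY`, and the three fields come from this README.

```python
from dataclasses import dataclass, field
from typing import Any, Awaitable, Callable

# Simplified stand-ins for the real models in runner/agents/models.py
@dataclass
class AgentDefn:
    agent_id: str
    agent_impl: Callable[..., Awaitable[Any]]
    agent_config_fields: list = field(default_factory=list)

AGENT_REGISTRY: dict[str, AgentDefn] = {}

async def my_agent_run(run_input: Any) -> Any:
    # Your agent loop: connect to the sandbox's /mcp/ endpoint,
    # call tools, and return a trajectory
    ...

AGENT_REGISTRY["my_custom_agent"] = AgentDefn(
    agent_id="my_custom_agent",
    agent_impl=my_agent_run,
)
```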
Execution Flow
- Receive trajectory ID and fetch agent configuration
- Spawn environment sandbox and wait for health check
- Populate environment with world snapshot and task data
- Configure MCP servers on the environment
- Run agent (connects to the environment's `/mcp/` endpoint)
- Create snapshot and upload to S3
- Report results via webhook
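The flow above can be sketched as a driver function. The endpoint paths come from the API table earlier in this README, but the payloads and the injected `http` client are illustrative (any object with `get`/`post` methods works, which also makes the sketch testable).

```python
def run_trajectory(http, base: str, world_snapshot: dict, mcp_config: dict) -> list[str]:
    """Drive one trajectory against a sandbox; returns the steps performed (sketch)."""
    steps = []
    http.get(f"{base}/health")                               # wait for health check
    steps.append("health")
    http.post(f"{base}/data/populate", json=world_snapshot)  # populate world snapshot
    steps.append("populate")
    http.post(f"{base}/apps", json=mcp_config)               # configure MCP servers
    steps.append("configure")
    # run the agent against f"{base}/mcp/" (omitted here)
    steps.append("agent")
    http.post(f"{base}/data/snapshot/s3", json={})           # snapshot and upload to S3
    steps.append("snapshot")
    return steps
```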
For more details, see the Agents README.
Grading
The Grading system evaluates completed agent trajectories by analyzing what changed and checking performance against criteria.
The system automatically:
- Computes snapshot diffs to identify file changes
- Extracts embedded images (charts, diagrams) from visual artifacts (docs, PDFs, sheets, slides)
- Selects relevant artifacts for each verifier
- Grades against task-specific criteria
- Calculates a final score
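As a hedged sketch of the first and last steps above: diffing two file manifests and aggregating per-criterion scores. The real grader's diff and scoring logic are more involved; the unweighted mean is an assumption, not the actual formula.

```python
def snapshot_diff(before: dict[str, str], after: dict[str, str]) -> dict[str, list[str]]:
    """Compare two {path: content_hash} manifests (illustrative)."""
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "modified": sorted(p for p in set(before) & set(after) if before[p] != after[p]),
    }

def aggregate_score(criterion_scores: list[float]) -> float:
    """Unweighted mean of per-criterion scores (an assumption, not the real formula)."""
    return sum(criterion_scores) / len(criterion_scores) if criterion_scores else 0.0
```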
Verifier Types
Task-Specific Verifiers: Custom criteria defined per task
- `output`: Grades based on file changes (requires snapshot diff)
- `trajectory`: Grades based on the agent's message history [COMING SOON]
- `value`: Grades based on extracted values [COMING SOON]
Inputs
- `trajectory_id`: Trajectory identifier for saving grades
- `trajectory`: Complete trajectory from the agent runner (`AgentTrajectoryOutput`)
- `grading_config`: Grading configuration
```json
{
  "grading_run_id": "gr_abc123",
  "model": "anthropic/claude-3-5-sonnet-20241022",
  "extra_args": { "temperature": 0.7 },
  "verifiers": [
    { ... }
  ]
}
```
