Rensei AI AgentFactory
The open-source software factory — multi-agent fleet management for coding agents.
AgentFactory turns your issue backlog into shipped code. It orchestrates a fleet of coding agents (Claude, Codex, Spring AI, or any A2A-compatible agent) through an automated pipeline: development, QA, and acceptance — like an assembly line for software.
Packages
| Package | npm | Description |
|---------|-----|-------------|
| @renseiai/agentfactory | @renseiai/agentfactory | Core orchestrator, provider abstraction, crash recovery |
| @renseiai/plugin-linear | @renseiai/plugin-linear | Linear issue tracker integration |
| @renseiai/agentfactory-server | @renseiai/agentfactory-server | Redis work queue, session storage, worker pool |
| @renseiai/agentfactory-cli | @renseiai/agentfactory-cli | CLI tools: orchestrator, workers, Linear CLI (af-linear) |
| @renseiai/agentfactory-nextjs | @renseiai/agentfactory-nextjs | Next.js route handlers, webhook processor, middleware |
| @renseiai/agentfactory-dashboard | @renseiai/agentfactory-dashboard | Fleet management dashboard UI |
| @renseiai/agentfactory-mcp-server | @renseiai/agentfactory-mcp-server | MCP server exposing fleet capabilities to external clients |
| @renseiai/agentfactory-code-intelligence | @renseiai/agentfactory-code-intelligence | Tree-sitter AST parsing, BM25 search, incremental indexing |
| @renseiai/create-agentfactory-app | @renseiai/create-agentfactory-app | Project scaffolding tool |
Quick Start
One-click deploy (fastest)
Deploy the dashboard with a single click — no local setup required:
| Platform | Deploy | Redis |
|----------|--------|-------|
| Vercel | | Add Vercel KV or Upstash after deploy |
| Railway | | Bundled automatically |
See the dashboard template for full setup instructions.
Create a new project (recommended for customization)
```bash
npx @renseiai/create-agentfactory-app my-agent
cd my-agent
cp .env.example .env.local   # Fill in LINEAR_ACCESS_TOKEN
pnpm install && pnpm dev     # Start webhook server
pnpm worker                  # Start local worker (in another terminal)
```
Webhook Server (Next.js)
For production use, AgentFactory provides a webhook server that receives Linear events and dispatches agents:
```ts
// src/lib/config.ts
import { createAllRoutes, createDefaultLinearClientResolver } from '@renseiai/agentfactory-nextjs'

export const routes = createAllRoutes({
  linearClient: createDefaultLinearClientResolver(),
})
```

```ts
// src/app/webhook/route.ts
import { routes } from '@/lib/config'

export const POST = routes.webhook.POST
export const GET = routes.webhook.GET
```
Spawn an agent on a single issue
```ts
import { createOrchestrator } from '@renseiai/agentfactory'

const orchestrator = createOrchestrator({
  maxConcurrent: 3,
  // Worktree root default: '../{repoName}.wt/' (sibling directory)
})

// Process a single issue
await orchestrator.spawnAgentForIssue('PROJ-123')
await orchestrator.waitForAll()
```
Process your entire backlog
```ts
const orchestrator = createOrchestrator({
  project: 'MyProject',
  maxConcurrent: 3,
})

const result = await orchestrator.run()
console.log(`Spawned ${result.agents.length} agents`)
await orchestrator.waitForAll()
```
Use the CLI
```bash
# Process backlog issues from a project
npx af-orchestrator --project MyProject --max 3

# Process a single issue
npx af-orchestrator --single PROJ-123

# Preview what would be processed
npx af-orchestrator --project MyProject --dry-run
```
Linear CLI
```bash
# Get issue details
npx af-linear get-issue PROJ-123

# List backlog issues for a project
npx af-linear list-backlog-issues --project "MyProject"

# Update issue status
npx af-linear update-issue PROJ-123 --state "Finished"

# Create a comment
npx af-linear create-comment PROJ-123 --body "Work complete"
```
Architecture
```
┌─────────────────────────────────────────────────┐
│                  Orchestrator                   │
│  ┌───────────┐  ┌───────────┐  ┌───────────┐    │
│  │  Agent 1  │  │  Agent 2  │  │  Agent 3  │    │
│  │ (Claude)  │  │  (Codex)  │  │ (Claude)  │    │
│  │ DEV: #123 │  │ QA: #120  │  │ DEV: #125 │    │
│  └─────┬─────┘  └─────┬─────┘  └─────┬─────┘    │
│        │              │              │          │
│  ┌─────┴─────┐  ┌─────┴─────┐  ┌─────┴─────┐    │
│  │ Worktree  │  │ Worktree  │  │ Worktree  │    │
│  │ repo.wt/  │  │ repo.wt/  │  │ repo.wt/  │    │
│  │   #123    │  │   #120    │  │   #125    │    │
│  └───────────┘  └───────────┘  └───────────┘    │
└─────────────────────────────────────────────────┘
          │                     │
     ┌────┴────┐           ┌────┴────┐
     │ Linear  │           │   Git   │
     │   API   │           │  Repo   │
     └─────────┘           └─────────┘
```
Provider Abstraction
AgentFactory supports multiple coding agent providers through a unified interface:
```ts
interface AgentProvider {
  readonly name: 'claude' | 'codex' | 'amp' | 'spring-ai' | 'a2a'
  spawn(config: AgentSpawnConfig): AgentHandle
  resume(sessionId: string, config: AgentSpawnConfig): AgentHandle
}

interface AgentHandle {
  sessionId: string | null
  stream: AsyncIterable<AgentEvent>
  injectMessage(text: string): Promise<void>
  stop(): Promise<void>
}
```
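To make the interface concrete, here is a minimal stub provider that satisfies it. Only the `AgentProvider` and `AgentHandle` shapes come from the README; the `AgentEvent`/`AgentSpawnConfig` fields and the fixed event sequence are illustrative assumptions, not AgentFactory internals.

```typescript
// Hypothetical stub provider for local testing. The AgentProvider/AgentHandle
// shapes match the README; the event and config fields are assumptions.
type AgentEvent = { type: string; text?: string }

interface AgentSpawnConfig { issueId: string; prompt?: string }

interface AgentHandle {
  sessionId: string | null
  stream: AsyncIterable<AgentEvent>
  injectMessage(text: string): Promise<void>
  stop(): Promise<void>
}

interface AgentProvider {
  readonly name: 'claude' | 'codex' | 'amp' | 'spring-ai' | 'a2a'
  spawn(config: AgentSpawnConfig): AgentHandle
  resume(sessionId: string, config: AgentSpawnConfig): AgentHandle
}

// Emits a fixed event sequence instead of driving a real agent process.
function makeHandle(sessionId: string): AgentHandle {
  async function* events() {
    yield { type: 'started' }
    yield { type: 'done' }
  }
  return {
    sessionId,
    stream: events(),
    injectMessage: async () => {},
    stop: async () => {},
  }
}

const stubProvider: AgentProvider = {
  name: 'claude',
  spawn: (config) => makeHandle(`stub-${config.issueId}`),
  resume: (sessionId) => makeHandle(sessionId),
}
```

A stub like this is useful for exercising orchestration logic (spawn, resume, event streaming) without paying for real agent sessions.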
Spring AI support means enterprise Java teams can orchestrate Spring AI-based agents in the same fleet as Claude and Codex agents, with the same pipeline, governance, and cost tracking.
The provider is selected via environment variables:

```bash
AGENT_PROVIDER=claude           # Global default
AGENT_PROVIDER_QA=codex         # Per-work-type override
AGENT_PROVIDER_SOCIAL=spring-ai # Per-project override
```
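The variable names imply a precedence order (scoped override wins over the global default), which can be sketched as a small resolver. The helper and fallback value are hypothetical, not AgentFactory's actual implementation:

```typescript
// Hypothetical resolver illustrating the precedence implied by the variable
// names: AGENT_PROVIDER_<SCOPE> overrides the AGENT_PROVIDER global default.
function resolveProvider(
  env: Record<string, string | undefined>,
  scope: string, // a work type like 'qa' or a project key (assumption)
): string {
  const override = env[`AGENT_PROVIDER_${scope.toUpperCase()}`]
  return override ?? env['AGENT_PROVIDER'] ?? 'claude' // fallback is assumed
}
```

For example, with `AGENT_PROVIDER=claude` and `AGENT_PROVIDER_QA=codex`, QA work resolves to `codex` while development work falls through to `claude`.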
Agent-to-Agent Protocol (A2A)
AgentFactory implements the A2A protocol (v0.3.0), operating as both client and server.
- Client mode: Invoke remote A2A agents (Spring AI or any A2A-compliant agent) as part of an orchestrated fleet. Route specific work types to external agents via environment config.
- Server mode: Expose fleet capabilities via `/.well-known/agent-card.json` discovery and JSON-RPC task submission. Any A2A-aware tool can submit work to the fleet.
This enables mixed fleets across languages, frameworks, and infrastructure with no coupling.
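As a sketch of what a client-side submission looks like, the following builds the JSON-RPC envelope an A2A client would send. The `message/send` method and message shape follow my reading of the A2A v0.3 spec, not AgentFactory's code; verify the exact contract against the agent card served at `/.well-known/agent-card.json`.

```typescript
// Sketch of an A2A JSON-RPC request envelope. Method name and message shape
// are assumptions based on A2A v0.3; check the target's agent card.
function buildSendMessageRequest(id: string, text: string) {
  return {
    jsonrpc: '2.0' as const,
    id,
    method: 'message/send',
    params: {
      message: {
        role: 'user',
        parts: [{ kind: 'text', text }],
        messageId: `msg-${id}`, // illustrative ID scheme
      },
    },
  }
}
```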
MCP Server
Fleet capabilities are exposed as an MCP server. Any MCP-aware client (Claude Desktop, Spring AI apps, IDE agents) can interact with the fleet.
Tools:
| Tool | Description |
|------|-------------|
| submit-task | Submit a new task to the fleet |
| get-task-status | Check status of a running task |
| list-fleet | List active agents and their assignments |
| get-cost-report | Retrieve cost tracking data |
| stop-agent | Stop a running agent |
| forward-prompt | Send a prompt to a specific agent |
Resources:
| URI | Description |
|-----|-------------|
| fleet://agents | Current fleet state |
| fleet://issues/{id} | Issue progress details |
| fleet://logs/{id} | Agent execution logs |
Transport: Streamable HTTP for remote access, STDIO for local CLI.
Spring AI Bench
AgentFactory agents can be evaluated through Spring AI Bench. The multi-agent pipeline (dev, QA, acceptance) improves benchmark reliability over single-agent runs.
Work Types
Issues flow through work stations based on their status:
| Status | Work Type | Agent Role |
|--------|-----------|------------|
| Backlog | development | Implement the feature/fix |
| Started | inflight | Continue in-progress work |
| Finished | qa | Validate implementation |
| Delivered | acceptance | Final acceptance testing |
| Rejected | refinement | Address feedback |
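The routing in the table above amounts to a status-to-work-type lookup, sketched below. The mapping mirrors the table; the function name and `undefined` fallback are illustrative, not AgentFactory's actual API.

```typescript
// Minimal sketch of the status -> work-type routing from the table above.
type WorkType = 'development' | 'inflight' | 'qa' | 'acceptance' | 'refinement'

const workTypeByStatus: Record<string, WorkType> = {
  Backlog: 'development',   // implement the feature/fix
  Started: 'inflight',      // continue in-progress work
  Finished: 'qa',           // validate implementation
  Delivered: 'acceptance',  // final acceptance testing
  Rejected: 'refinement',   // address feedback
}

function workTypeForStatus(status: string): WorkType | undefined {
  return workTypeByStatus[status]
}
```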
Crash Recovery
AgentFactory includes built-in crash recovery:
- Heartbeat monitoring — agents send periodic health signals
- State persistence — session state saved to the `.agent/` directory
- Automatic resume — crashed agents are detected and restarted
- Recovery limits — configurable max recovery attempts
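Combining heartbeat monitoring with recovery limits boils down to a per-tick decision like the one below. The field and parameter names are assumptions for illustration, not AgentFactory's actual internals:

```typescript
// Illustrative check a heartbeat monitor could run each tick: recover an
// agent only if it has gone quiet AND still has recovery attempts left.
// Field names are assumptions, not AgentFactory's real data model.
interface AgentHealth {
  lastHeartbeatMs: number   // epoch ms of the last heartbeat received
  recoveryAttempts: number  // restarts already consumed
}

function shouldRecover(
  agent: AgentHealth,
  nowMs: number,
  heartbeatTimeoutMs: number,
  maxRecoveryAttempts: number,
): boolean {
  const stale = nowMs - agent.lastHeartbeatMs > heartbeatTimeoutMs
  return stale && agent.recoveryAttempts < maxRecoveryAttempts
}
```

Gating on both conditions prevents a crash-looping agent from being restarted forever.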
Inactivity Timeout
Agents are monitored for inactivity:
```ts
const orchestrator = createOrchestrator({
  inactivityTimeoutMs: 300000,   // 5 minutes default
  maxSessionTimeoutMs: 7200000,  // 2 hours
})
```
