
AgentField

A framework for the AI backend. Build and run AI agents like microservices - scalable, observable, and identity-aware from day one.


<div align="center"> <img src="assets/github hero.png" alt="AgentField - The AI Backend" width="100%" />

The AI Backend

Build and scale AI agents like APIs. Deploy, observe, and prove.

AI has outgrown chatbots and prompt orchestrators. Backend agents need backend infrastructure.


Docs · Quick Start · Python SDK · Go SDK · TypeScript SDK · REST API · Examples · Discord

</div>

AgentField is an open-source control plane that makes AI agents callable by any service in your stack - frontends, backends, other agents, cron jobs - just like any other API. You write agent logic in Python, Go, or TypeScript. AgentField turns it into production infrastructure: routing, coordination, memory, async execution, and cryptographic audit trails. Every function becomes a REST endpoint. Every agent gets a cryptographic identity. Every decision is traceable.

from agentfield import Agent, AIConfig
from pydantic import BaseModel

app = Agent(
    node_id="claims-processor",
    version="2.1.0",  # Canary deploys, A/B testing, blue-green rollouts
    ai_config=AIConfig(model="anthropic/claude-sonnet-4-20250514"),
)

class Decision(BaseModel):
    action: str  # "approve", "deny", "escalate"
    confidence: float
    reasoning: str

@app.reasoner(tags=["insurance", "critical"])
async def evaluate_claim(claim: dict) -> dict:

    # Structured AI judgment - returns typed Pydantic output
    decision = await app.ai(
        system="Insurance claims adjuster. Evaluate and decide.",
        user=f"Claim #{claim['id']}: {claim['description']}",
        schema=Decision,
    )

    if decision.confidence < 0.85:
        # Human approval - suspends execution, notifies via webhook, resumes when approved
        await app.pause(
            approval_request_id=f"claim-{claim['id']}",
            approval_request_url=f"https://internal.acme.com/approvals/claim-{claim['id']}",
            expires_in_hours=48,
        )

    # Route to the next agent - traced through the control plane
    await app.call("notifier.send_decision", input={
        "claim_id": claim["id"],
        "decision": decision.model_dump(),
    })

    return decision.model_dump()

app.run()
# This single line exposes: POST /api/v1/execute/claims-processor.evaluate_claim
# The agent auto-registers with the control plane, gets a cryptographic identity, and every
# execution produces a verifiable, tamper-proof audit trail.

What you just saw: app.ai() calls an LLM and returns structured output. app.pause() suspends for human approval. app.call() routes to other agents through the control plane. app.run() auto-exposes everything as REST. Read the full docs →

Quick Start

curl -fsSL https://agentfield.ai/install.sh | bash   # Install CLI
af init my-agent --defaults                            # Scaffold agent
cd my-agent && pip install -r requirements.txt
af server          # Terminal 1 → Dashboard at http://localhost:8080
python main.py     # Terminal 2 → Agent auto-registers
# Call your agent
curl -X POST http://localhost:8080/api/v1/execute/my-agent.demo_echo \
  -H "Content-Type: application/json" \
  -d '{"input": {"message": "Hello!"}}'
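The same endpoint can be called from any service in your stack. A minimal Python sketch of the client side, using only the standard library - the helper name build_execute_request is illustrative, and the URL and payload shape follow the curl example above:

```python
import json
from urllib import request

def build_execute_request(base_url: str, agent: str, func: str, payload: dict):
    """Build the control-plane call: POST /api/v1/execute/{agent}.{func}."""
    url = f"{base_url}/api/v1/execute/{agent}.{func}"
    body = json.dumps({"input": payload}).encode()
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_execute_request("http://localhost:8080", "my-agent", "demo_echo",
                            {"message": "Hello!"})
# response = request.urlopen(req)  # requires a running control plane
```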
<details> <summary><b>Go / TypeScript / Docker</b></summary>
# Go
af init my-agent --defaults --language go && cd my-agent && go run .

# TypeScript
af init my-agent --defaults --language typescript && cd my-agent && npm install && npm run dev

# Docker (control plane only)
docker run -p 8080:8080 agentfield/control-plane:latest

Deployment guide → for Docker Compose, Kubernetes, and production setups.

</details>

What You Get

Build - Python, Go, or TypeScript. Every function becomes a REST endpoint.

  • Reasoners & Skills - @app.reasoner() for AI judgment, @app.skill() for deterministic code
  • Structured AI - app.ai(schema=MyModel) → typed Pydantic/Zod output from any LLM
  • Harness - app.harness("Fix the bug") dispatches multi-turn tasks to Claude Code, Codex, Gemini CLI, or OpenCode
  • Cross-Agent Calls - app.call("other-agent.func") routes through the control plane with full tracing
  • Discovery - app.discover(tags=["ml*"]) finds agents and capabilities across the mesh. tools="discover" lets LLMs auto-invoke them.
  • Memory - app.memory.set() / .get() / .search() - KV + vector search, four scopes, no Redis needed
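Tag wildcards like "ml*" behave as glob patterns. The matching idea can be pictured in plain Python with fnmatch - a hypothetical registry for illustration, not the SDK's internals:

```python
from fnmatch import fnmatch

# Hypothetical registry of agents and their tags
registry = {
    "ml-classifier": ["ml", "vision"],
    "ml-embedder": ["ml", "nlp"],
    "billing": ["finance"],
}

def discover(patterns):
    """Return agents with at least one tag matching any glob pattern."""
    return sorted(
        agent for agent, tags in registry.items()
        if any(fnmatch(tag, p) for tag in tags for p in patterns)
    )

print(discover(["ml*"]))   # ['ml-classifier', 'ml-embedder']
print(discover(["fin*"]))  # ['billing']
```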

Run - Production infrastructure for non-deterministic AI.

  • Async Execution - Fire-and-forget with webhooks, SSE streaming, retries. No timeout limits - agents run for hours or days.
  • Human-in-the-Loop - app.pause() suspends execution for human approval. Crash-safe, durable, audited.
  • Canary Deployments - Traffic weight routing, A/B testing, blue-green deploys. Roll out agent versions at 5% → 50% → 100%.
  • Observability - Automatic workflow DAGs, Prometheus /metrics, structured logs, execution timeline.
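Traffic-weight routing can be pictured as deterministic bucketing of request IDs: hash each ID into [0, 1) and pick the version whose cumulative weight covers it. A sketch of the idea, not AgentField's routing code:

```python
import hashlib

def pick_version(request_id: str, weights: dict) -> str:
    """Map a request ID to a version bucket in proportion to the weights."""
    # Stable hash in [0, 1): the same request always routes the same way.
    h = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) / 16**64
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if h < cumulative:
            return version
    return version  # guard against float rounding

weights = {"2.1.0-canary": 0.05, "2.0.0": 0.95}
hits = sum(pick_version(f"req-{i}", weights) == "2.1.0-canary"
           for i in range(10_000))
print(f"canary share: {hits / 10_000:.1%}")  # close to 5%
```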

Govern - IAM for AI agents. Identity, access control, and audit trails - built in.

  • Cryptographic Identity - Every agent gets a W3C DID (decentralized identifier) - not a shared API key. Agents authenticate to each other the way services authenticate with mTLS, but with cryptographic signatures that travel with the agent.
  • Verifiable Credentials - Tamper-proof receipt for every execution. Offline-verifiable: af vc verify audit.json.
  • Policy Enforcement - Tag-based policy gates with cryptographic verification. "Only agents tagged 'finance' can call this" - enforced by infrastructure, not prompts.
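AgentField's receipts are DID-based verifiable credentials; the underlying tamper-evidence idea can be sketched with a symmetric HMAC. This is a simplification - a real VC uses asymmetric keys so anyone can verify without holding the signing secret:

```python
import hashlib
import hmac
import json

SECRET = b"agent-signing-key"  # stand-in for the agent's private key

def sign_receipt(record: dict) -> dict:
    """Attach a signature over the canonical JSON of an execution record."""
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"record": record, "signature": sig}

def verify_receipt(receipt: dict) -> bool:
    """Offline check: recompute the signature and compare in constant time."""
    payload = json.dumps(receipt["record"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

receipt = sign_receipt({"agent": "claims-processor", "action": "approve",
                        "claim_id": 42})
assert verify_receipt(receipt)
receipt["record"]["action"] = "deny"  # tampering breaks verification
assert not verify_receipt(receipt)
```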

See the full production-ready feature set →

<div align="center"> <img src="assets/features-strip.png" alt="90+ Production Features" width="100%" /> </div> <details> <summary><h4 align="center">▼ Click to expand full capabilities</h4></summary>

AI & LLM

| Feature | How |
|---|---|
| Structured output (Pydantic/Zod) | app.ai(schema=MyModel) |
| Multi-turn coding agents | app.harness("task", provider="claude-code") |
| LLM auto-discovers agents and tools | app.ai(tools="discover") |
| Multimodal (text, image, audio) | app.ai("Describe", image_url="...") |
| Streaming responses | app.ai("...", stream=True) |
| 100+ LLMs via LiteLLM | AIConfig(model="anthropic/claude-sonnet-4-20250514") |
| Temperature, max tokens, format | app.ai(..., temperature=0.2) |
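Structured output means the model's JSON reply is validated into a typed object rather than handled as free text. The idea, illustrated with stdlib dataclasses instead of the SDK's Pydantic integration:

```python
import json
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float
    reasoning: str

def parse_structured(raw: str) -> Decision:
    """Validate an LLM's JSON reply against the expected schema."""
    data = json.loads(raw)
    return Decision(**data)  # raises if required fields are missing

reply = '{"action": "approve", "confidence": 0.91, "reasoning": "Receipts match."}'
decision = parse_structured(reply)
print(decision.action, decision.confidence)  # approve 0.91
```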

Agent Mesh & Discovery

| Feature | How |
|---|---|
| Cross-agent calls with tracing | app.call("agent.func", input={...}) |
| Discover agents by tag (wildcards) | app.discover(tags=["ml*"]) |
| Discover by health status | app.discover(health_status="active") |
| Agent routers (namespacing) | AgentRouter(prefix="billing") |
