SimpleAgents
SimpleAgents lets anyone vibe-code LLM agents and ship them production-ready. It's Rust-first with Python, Node, and WASM bindings, multi-provider support, YAML workflows, validation, tracing/replay, resilience, structured outputs, and eval-ready tooling built in.
Every agentic SaaS is a config.
Define your AI product as a YAML workflow. Run it in Python or TypeScript. Ship today.
Links
- Docs: https://docs.simpleagents.craftsmanlabs.net/
- Playground: https://yamslam.craftsmanlabs.net/playground
Install
```shell
pip install simple-agents-py    # Python
npm install simple-agents-node  # TypeScript / Node
```
How It Works
- Define your workflow as YAML -- nodes, edges, structured outputs, routing
- Run it with 10 lines of Python or TypeScript
- Ship -- streaming, images, Langfuse/Jaeger observability all work out of the box
Every email classifier, document processor, intake system, interview bot, and support agent is the same pattern: LLM nodes with structured outputs and deterministic routing. SimpleAgents makes that pattern a config file.
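That pattern is small enough to sketch in plain Python. The snippet below is illustrative only — `llm_classify` stubs out the model call and none of these names are the SimpleAgents API — but it shows the shape every such product shares: a node that returns structured output, and edges chosen deterministically from that output.

```python
# Toy sketch of "LLM node + structured output + deterministic routing".
# llm_classify stands in for a real model call; nothing here is the
# SimpleAgents API.

def llm_classify(message: str) -> dict:
    # A real node would call an LLM and validate the reply against a
    # JSON schema; this stub fakes the structured output.
    category = "billing" if "refund" in message.lower() else "support"
    return {"category": category}

ROUTES = {  # deterministic routing keyed on the structured output
    "billing": lambda m: f"billing queue <- {m}",
    "support": lambda m: f"support queue <- {m}",
    "sales": lambda m: f"sales queue <- {m}",
}

def run(message: str) -> str:
    result = llm_classify(message)              # LLM node, structured output
    return ROUTES[result["category"]](message)  # edge picked by the output

print(run("I need a refund for order #1234"))
# -> billing queue <- I need a refund for order #1234
```

SimpleAgents moves the node definitions and routing table out of code and into the YAML config shown next.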
Quick Example
workflow.yaml

```yaml
id: classifier
version: 1.0.0
entry_node: classify
nodes:
  - id: classify
    node_type:
      llm_call:
        model: gpt-4.1-mini
        messages_path: input.messages
        append_prompt_as_user: true
        heal: true
    config:
      output_schema:
        type: object
        properties:
          category:
            type: string
            enum: [billing, support, sales]
        required: [category]
        additionalProperties: false
      prompt: |
        Classify the user message. Return JSON only.
edges: []
```
run.py

```python
import json
import os
from pathlib import Path

from dotenv import load_dotenv
from simple_agents_py import Client
from simple_agents_py.workflow_payload import workflow_execution_request_to_mapping
from simple_agents_py.workflow_request import (
    WorkflowExecutionRequest, WorkflowMessage, WorkflowRole,
)

load_dotenv()
client = Client(
    os.environ["WORKFLOW_PROVIDER"],
    api_base=os.environ["WORKFLOW_API_BASE"],
    api_key=os.environ["WORKFLOW_API_KEY"],
)
req = WorkflowExecutionRequest(
    workflow_path=str(Path("workflow.yaml").resolve()),
    messages=[WorkflowMessage(role=WorkflowRole.USER, content="I need a refund for order #1234")],
)
result = client.run_workflow(workflow_execution_request_to_mapping(req))
print(json.dumps(result, indent=2))
```
That's it. Your agentic SaaS is a config.
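Because the workflow declares an output_schema, downstream code can trust the result's shape and branch on it directly. A minimal sketch of that contract — the `result` dict here is hand-written to match the schema's enum, and the real payload returned by `run_workflow` may be wrapped differently:

```python
# Hand-written stand-in for a workflow result that matches the
# output_schema above; the real run_workflow payload may differ in shape.
result = {"category": "billing"}

ALLOWED = {"billing", "support", "sales"}  # mirrors the schema's enum

def validate(payload: dict) -> str:
    # Enforce the same constraints the YAML schema declares:
    # required "category", enum membership, no extra keys.
    if set(payload) != {"category"}:
        raise ValueError("unexpected keys")
    if payload["category"] not in ALLOWED:
        raise ValueError("category outside enum")
    return payload["category"]

print(validate(result))  # -> billing
```

In practice the engine enforces the schema for you; the point is that routing and downstream handlers get a guaranteed shape, not free text.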
What You Get
- YAML workflow engine -- classify, route, extract, generate as a graph config
- Python + TypeScript -- pip install/npm install, run with 10 lines
- Streaming -- real-time LLM output streaming
- Images -- multimodal input (text + images) in the same workflow
- JSON healing -- auto-fix truncated/malformed LLM JSON output
- Observability -- Langfuse and Jaeger via OpenTelemetry, one env block
- Custom workers -- plug your own code (DB lookups, APIs) into the workflow graph
- Rust core -- blazing fast engine with Python, TypeScript, and WASM bindings
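To make the JSON-healing bullet concrete: models sometimes truncate structured output mid-object, and healing appends the closers the truncation cut off. A toy illustration of the idea — this is not the algorithm in SimpleAgents' Rust core, just the general technique:

```python
import json

def heal_truncated_json(text: str) -> dict:
    # Toy healer: close an unterminated string, then append whatever
    # closing brackets the truncation cut off. Illustrative only -- not
    # the healer implemented in the SimpleAgents Rust core.
    if text.count('"') % 2 == 1:  # naive check; ignores escaped quotes
        text += '"'
    stack = []
    in_string = False
    escaped = False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]":
            stack.pop()
    text += "".join(reversed(stack))
    return json.loads(text)

print(heal_truncated_json('{"category": "billing", "items": [{"id": 1'))
# -> {'category': 'billing', 'items': [{'id': 1}]}
```

With `heal: true` on an llm_call node (as in the quick example), this kind of repair happens inside the engine before schema validation.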
Documentation
- Start here: docs/WORKFLOW_QUICKSTART.md -- install, create YAML, run in Python/TypeScript
- Examples: docs/EXAMPLES.md
- YAML system guide: docs/YAML_WORKFLOW_SYSTEM.md
- Python binding: docs/BINDINGS_PYTHON.md
- Node/TypeScript binding: docs/BINDINGS_NODE.md
- WASM binding: docs/BINDINGS_WASM.md
- Observability (Langfuse/Jaeger): docs/TRACING_ARCHITECTURE.md
- Rust quick start: docs/QUICKSTART.md
- Rust usage: docs/USAGE.md
- Troubleshooting: docs/TROUBLESHOOTING.md
- Development/Contributing: docs/DEVELOPMENT.md
- Docs map: docs/DOCS_MAP.md
Contributing
- Start with CONTRIBUTING.md and docs/DEVELOPMENT.md.
- Follow task-tracking expectations in TODO.md (and SUBAGENT_TODO.md for larger parallel workstreams).
- Run the relevant test/lint/format/parity commands before opening a PR.
License
- Repository license file: LICENSE (Apache License 2.0 text).
- Package metadata in the workspace declares MIT OR Apache-2.0 for crates/bindings where noted.

For redistribution- or compliance-sensitive usage, verify the root license files and per-package metadata.
