Maestro
The Maestro App Factory: a highly-opinionated multi-agent orchestration tool for app development that emulates the workflow of high-functioning human development teams using AI agents

The Maestro App Factory™
Maestro is a tool that uses AI to write full applications in a disciplined way that reflects good software engineering principles.
In some ways, it's an agent orchestration tool. But unlike most others, Maestro bakes in structure, workflow, and opinions drawn from real-world experience managing large software projects. The goal is production-ready apps, not just code snippets.
The big idea behind Maestro is that, since LLMs are trained on and exhibit human behaviors, it makes sense to organize them to operate like the highest-performing human teams rather than relying on a single model or agent, no matter how good.

❤️ Support This Project
This project is developed and actively maintained by Snapdragon Partners. Tokens for developing Maestro are expensive, but we're keeping the community version of Maestro free. If you like Maestro, any support you can provide via GitHub Sponsors would be appreciated!
Quickstart
Step 1: Install Maestro via Homebrew, APT, or direct download from releases.
Option A: Homebrew (macOS)
```shell
brew install --cask SnapdragonPartners/tap/maestro
```

Option B: Control Panel App (macOS)
A native macOS app is available as a graphical wrapper for the Maestro CLI. Download it from maestro-macos releases. You can still use Homebrew or the CLI directly if you prefer, but the control panel app contains everything you need to use Maestro.
Option C: APT (Debian/Ubuntu)
```shell
# Add the Maestro APT repository (one-time setup)
curl -fsSL https://snapdragonpartners.github.io/maestro/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/maestro.gpg
echo "deb [signed-by=/usr/share/keyrings/maestro.gpg] https://snapdragonpartners.github.io/maestro stable main" | sudo tee /etc/apt/sources.list.d/maestro.list

# Install (or upgrade)
sudo apt update && sudo apt install maestro
```

Option D: Direct download

Download the binary for your platform from releases and install it somewhere in your PATH.
Step 2: Provide your API keys for the models you want to use and GitHub. You have two options:
Option A: Environment variables (traditional)
```shell
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_GENAI_API_KEY=AIza...   # Optional, for Gemini models
export GITHUB_TOKEN=ghp-...

# Optional: Ollama for local models (default: http://localhost:11434)
export OLLAMA_HOST=http://localhost:11434

# Optional: Enable web search for agents (Google Custom Search)
export GOOGLE_SEARCH_API_KEY=AIza...
export GOOGLE_SEARCH_CX=...           # Your Custom Search Engine ID
```

Option B: Configure via Web UI (easier)
Skip this step entirely and just run Maestro. If any required API keys are missing, Maestro will automatically open a setup page in the Web UI where you can paste your keys into a browser form. Keys are encrypted and stored locally.
Step 3: Create a project directory (projectdir) and switch to it.
```shell
mkdir myproject && cd myproject
```

Step 4: Run Maestro

```shell
maestro
```

If any required API keys are missing, Maestro will launch in setup mode — open the Web UI (default http://localhost:8080) and follow the prompts to enter your keys. Once all keys are configured, Maestro continues startup automatically.
Important: When Maestro generates a password, it is used for both WebUI login and secrets encryption. Record it somewhere safe — if lost, any secrets stored through the WebUI cannot be recovered. To use your own persistent password, set the `MAESTRO_PASSWORD` environment variable before running Maestro.

Step 5: Open the web UI at http://localhost:8080 (you can change this in the config file).
- Work with the PM to bootstrap your project by uploading a pre-existing spec or starting a PM interview to generate a specification
- View stories, logs, and system metrics
- Monitor agent activity in real-time
- Optionally chat with agents as you watch their progress
Config settings are in <projectdir>/.maestro/config.json.
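For example, overriding the web UI port might look like this (a sketch only: the key names here are illustrative, not Maestro's actual schema; see the Wiki for the real settings):

```json
{
  "webui": {
    "port": 9090
  }
}
```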
System Requirements
- Binary: ~15 MB fat binary (Linux & macOS tested; Windows soon)
- Go: Only needed if compiling from source (Go 1.24+)
- Docker: CLI + daemon required
- GitHub: Token with push/PR/merge perms (standard mode only)
- Ollama: Required for airplane mode (local LLMs)
- Resources: Runs comfortably on a personal workstation
Documentation
Much more extensive documentation, including configuration settings, is available in the Wiki.
Why Maestro?
Much simpler setup than other frameworks: Maestro uses just a single binary and your existing development tools. It comes with preset config and workflow that work out of the box, but can be customized as needed.
Most frameworks require wrestling with Python versions, dependency hell, or complex setup. With Maestro:
- Download the binary (or build from source)
- Provide your API keys as environment variables
- Run Maestro and start building via the web UI
What Model Does Maestro Use?
Maestro provides out-of-the-box support for Anthropic, Google, and OpenAI models through their official SDKs (so it should support the latest models as soon as they become available). Maestro also supports open-source and open-weight models running locally through Ollama.
You can mix and match models by agent type - in fact, that's the recommended configuration, since heterogeneous models often catch errors that models from the same provider may not.
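As a sketch, a mixed-provider setup might assign one model per agent role like this (both the key names and the model identifier strings are illustrative; consult the Wiki for the actual config.json schema):

```json
{
  "models": {
    "pm": "claude-opus-4-5",
    "architect": "gpt-5.2",
    "coder": "claude-sonnet-4-5"
  }
}
```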
Key Ideas
Agent Roles
- PM (Product Manager) (singleton):
- Conducts interactive requirements interviews via web UI
- Adapts questions based on user expertise level (non-technical, basic, expert)
- Can read existing codebase to provide context-aware questions
- Generates requirements specifications describing what the user needs
- Iterates with architect for spec approval and refinement
- Does not write technical specs or stories - that's the architect's job
- Architect (singleton):
- Transforms requirements into technical specifications
- Breaks specs into stories
- Reviews and approves plans
- Enforces principles (DRY, YAGNI, abstraction levels, test coverage)
- Maintains separate conversation contexts for each agent to preserve continuity and avoid contradictory feedback
- Merges PRs
- Does not write code directly
- Coders (many):
- Pull stories from a queue
- Develop plans, then code
- Must check in periodically
- Run automated tests before completing work
- Submit PRs for architect review
Coders are goroutines that fully terminate and restart between stories. All state (stories, messages, progress, tokens, costs, etc.) is persisted in a SQLite database.
Workflow at a Glance
- PM conducts interactive interview and generates spec (or user provides spec file)
- Architect reviews and approves spec (with iterative feedback if needed)
- Architect breaks spec into stories and dispatches them
- Coders plan, get approval, then implement
- Architect reviews code + tests, merges PRs
- Coders terminate, new ones spawn for new work
If a coder stalls or fails, Maestro automatically retries or reassigns. Questions can bubble up to a human via CLI or web UI.
See the canonical state diagrams for details:
- PM state machine - Interactive spec generation and architect feedback
- Architect state machine - Spec review, story generation, and code oversight
- Coder state machine - Planning, coding, and testing workflow
Tools & Environment
- GitHub (standard mode) or Gitea (airplane mode):
- Local mirrors for speed
- Tokens for push/PR/merge
- One working clone per coder, deleted when the coder terminates
- In airplane mode, a local Gitea server provides the same PR/merge workflow offline
- Docker:
- All agents run in Docker containers with security hardening
- Containers run as non-privileged user (1000:1000) for security
- Coders run read-only for planning, read-write for coding
- Provides security isolation and portability
- Docker Compose:
- Specs requiring external services (PostgreSQL, Redis, etc.) use Docker Compose
- Place a `compose.yml` in your project's `.maestro/` directory
- Coders call `compose_up` to start services, which creates a Docker network connecting services to the coder container
- Compose stacks are automatically started at the beginning of CODING and TESTING states
- Services are isolated per-agent using project name prefixes (`maestro-<agent-id>`)
- No technology downgrades needed—if your spec says PostgreSQL, use PostgreSQL
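For instance, a spec that calls for PostgreSQL might ship a compose.yml like this minimal sketch (the service name, image tag, and credentials are placeholders, not values Maestro requires):

```yaml
# .maestro/compose.yml (placeholder service definition)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: app
    ports:
      - "5432:5432"
```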
- Makefiles:
- Used for build, test, lint, run
- Either wrap your existing build tool or override targets in config
- Aggressive lint/test defaults (“turn checks up to 11”)
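A wrapper Makefile along these lines is one way to adapt an existing toolchain (the target names follow the build/test/lint/run convention above; the commands are placeholders for whatever tools your project already uses):

```make
# Placeholder targets wrapping an existing Go toolchain
.PHONY: build test lint run

build:
	go build ./...

test:
	go test ./...

lint:
	golangci-lint run   # or your linter of choice

run:
	go run .
```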
- LLMs:
- Supports OpenAI, Anthropic, Google Gemini, and Ollama (local models) via official SDKs
- PM defaults: Claude Opus 4.5 (latest Anthropic flagship for nuanced requirements gathering)
- Architect defaults: GPT-5.2 (latest OpenAI model for reliable code review)
- Coders default: Claude Sonnet 4.5 (latest coding-oriented model)
- All models configurable per-project in config.json
- Rate limiting handled internally via token buckets
- Ollama support: Run local models like Llama 3.2, Qwen, or Mistral for airplane mode
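The token-bucket rate limiting mentioned above can be sketched as follows (an illustrative minimal bucket, not Maestro's internal implementation): each provider gets a bucket, a request proceeds only if a token is available, and tokens refill over time.

```go
package main

import "fmt"

// Bucket is a minimal token bucket: capacity caps burst size,
// refill is how many tokens are restored per tick.
type Bucket struct {
	tokens   float64
	capacity float64
	refill   float64
}

// Tick restores refill tokens, up to capacity.
func (b *Bucket) Tick() {
	b.tokens += b.refill
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
}

// Allow consumes one token if available, else rejects the request.
func (b *Bucket) Allow() bool {
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := &Bucket{tokens: 2, capacity: 2, refill: 1}
	for i := 0; i < 4; i++ {
		fmt.Println(b.Allow()) // true, true, false, false
	}
	b.Tick()
	fmt.Println(b.Allow()) // true again after refill
}
```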
