
Dagu

A local-first workflow engine built the way it should be: declarative, file-based, self-contained, air-gapped ready. One binary that scales from laptop to distributed cluster. Its built-in Workflow Operator handles creating and debugging workflows.

Install / Use

/learn @dagucloud/Dagu

README

<div align="center"> <img src="./assets/images/hero-logo.webp" width="480" alt="Dagu Logo"> <p> <a href="https://docs.dagu.sh/overview/changelog"><img src="https://img.shields.io/github/release/dagu-org/dagu.svg?style=flat-square" alt="Latest Release"></a> <a href="https://github.com/dagu-org/dagu/actions/workflows/ci.yaml"><img src="https://img.shields.io/github/actions/workflow/status/dagu-org/dagu/ci.yaml?style=flat-square" alt="Build Status"></a> <a href="https://discord.gg/gpahPUjGRk"><img src="https://img.shields.io/discord/1095289480774172772?style=flat-square&logo=discord" alt="Discord"></a> <a href="https://bsky.app/profile/dagu-org.bsky.social"><img src="https://img.shields.io/badge/Bluesky-0285FF?style=flat-square&logo=bluesky&logoColor=white" alt="Bluesky"></a> </p> <p> <a href="https://docs.dagu.sh">Docs</a> | <a href="https://docs.dagu.sh/writing-workflows/examples">Examples</a> | <a href="https://discord.gg/gpahPUjGRk">Support & Community</a> </p> </div>

What is Dagu Workflow Engine?

Dagu is a self-contained, lightweight workflow engine for small teams. Define workflows in simple YAML, execute them anywhere with a single binary, compose complex pipelines from reusable sub-workflows, and distribute tasks across workers. All without requiring databases, message brokers, or code changes to your existing scripts.

Built for developers who want powerful workflow orchestration without the operational overhead. For a quick feel of how it works, take a look at the examples.

  • Zero-Ops: Single binary, file-based storage, under 128MB memory footprint
  • Full-Power: Docker steps, SSH execution, DAG composition, distributed mode, Git-based version management for DAGs & docs, 19+ executors
  • AI-Native: Built-in LLM agent creates, edits, and debugs workflows from natural language in the Web UI or as type: agent steps
  • Workflow Operator: Persistent AI operator for Slack and Telegram. Monitor runs, debug failures, recover incidents, and continue follow-up in the same conversation
  • Legacy Script Friendly: Orchestrate existing shell commands, Python scripts, Docker containers, or HTTP calls without modification
  • Air-gapped Ready: Runs in isolated environments without external dependencies or network access
<div align="center"> <img src="./assets/images/dagu-demo.gif" alt="Demo" width="720"> </div>

Screenshots: Cockpit (Kanban) board and DAG Run Details view.

Try it live: Live Demo (credentials: demouser / demouser)

The Dagu Difference

Keep workflow orchestration separate from business logic. Define workflows declaratively, stay zero-invasive to application code, and get a more capable alternative to cron without taking on Airflow-level complexity.

  Traditional Orchestrator           Dagu
  ┌────────────────────────┐        ┌──────────────────┐
  │  Web Server            │        │                  │
  │  Scheduler             │        │  dagu start-all  │
  │  Worker(s)             │        │                  │
  │  PostgreSQL            │        └──────────────────┘
  │  Redis / RabbitMQ      │         Single binary.
  │  Python runtime        │         Zero dependencies.
  └────────────────────────┘         Just run it.
    6+ services to manage

One binary. No Postgres. No Redis. No Python. Just dagu start-all.

Quick Start

1. Install

macOS/Linux:

curl -fsSL https://raw.githubusercontent.com/dagu-org/dagu/main/scripts/installer.sh | bash

Homebrew:

brew install dagu

Windows (PowerShell):

irm https://raw.githubusercontent.com/dagu-org/dagu/main/scripts/installer.ps1 | iex

The script installers open a guided wizard. They can install Dagu, add it to your PATH, set it up as a background service, create the first admin account, and install the Dagu AI skill when a supported AI tool is detected.

Homebrew, npm, Docker, Helm, and manual downloads install Dagu without the guided wizard. See the Installation docs for the full install guide and advanced options.

Docker:

docker run --rm -v ~/.dagu:/var/lib/dagu -p 8080:8080 ghcr.io/dagucloud/dagu:latest dagu start-all

Kubernetes (Helm):

helm repo add dagu https://dagucloud.github.io/dagu
helm repo update
helm install dagu dagu/dagu --set persistence.storageClass=<your-rwx-storage-class>

Replace <your-rwx-storage-class> with a StorageClass in your cluster that supports ReadWriteMany. If your cluster default storage class already supports ReadWriteMany, you can omit the flag. See charts/dagu/README.md for chart details, values, and source-checkout installation.

More options (npm, custom paths, specific versions): Installation docs

The script installers also support uninstall. See the Installation docs for --uninstall / -Uninstall, optional data purge, and AI skill removal.

2. Set up AI-assisted workflow authoring (optional)

If you use an AI coding tool (Claude Code, Codex, OpenCode, Gemini CLI, or Copilot CLI), install the Dagu skill so the AI can write correct DAG YAML.

If you installed Dagu with Homebrew, npm, or a manual binary download, run this after dagu is available on your PATH. The guided installer can offer the same step automatically.

Use Dagu's built-in installer:

dagu ai install --yes

Fallback via the shared skills CLI:

npx skills add https://github.com/dagu-org/dagu --skill dagu

For explicit skills directories, see the installation docs and the CLI reference.

3. Create your first workflow

When you first start Dagu with an empty DAGs directory, it automatically creates example workflows. Set DAGU_SKIP_EXAMPLES=true to skip this.

cat > ./hello.yaml << 'EOF'
steps:
  - echo "Hello from Dagu!"
  - echo "Running step 2"
EOF
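The shorthand above, where each list item is just a command string, can also be written in the explicit form used elsewhere in this README; the step names below (step-1, step-2) are illustrative, and type: chain keeps the steps sequential:

```yaml
# Explicit form of hello.yaml: named steps with commands.
type: chain
steps:
  - name: step-1
    command: echo "Hello from Dagu!"
  - name: step-2
    command: echo "Running step 2"
```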

4. Run the workflow

dagu start hello.yaml

5. Check the status

dagu status hello

6. Explore the Web UI

dagu start-all

Visit http://localhost:8080

Docker Compose: Clone the repo and run docker compose -f deploy/docker/compose.minimal.yaml up -d. See deployment docs for production setups.

Workflow Examples

Sequential Steps

Steps execute one after another:

type: chain
steps:
  - command: echo "Hello, dagu!"
  - command: echo "This is a second step"
Diagram: Step 1 → Step 2
Parallel Steps

Independent steps run in parallel once their shared dependencies complete:

type: graph
steps:
  - id: step_1
    command: echo "Step 1"
  - id: step_2a
    command: echo "Step 2a - runs in parallel"
    depends: [step_1]
  - id: step_2b
    command: echo "Step 2b - runs in parallel"
    depends: [step_1]
  - id: step_3
    command: echo "Step 3 - waits for parallel steps"
    depends: [step_2a, step_2b]
Diagram: step_1 → step_2a / step_2b (parallel) → step_3

Docker Step

Run containers as workflow steps:

steps:
  - name: build-app
    container:
      image: node:20-alpine
    command: npm run build

SSH Execution

Run commands on remote machines:

steps:
  - name: deploy
    type: ssh
    config:
      host: prod-server.example.com
      user: deploy
      key: ~/.ssh/id_rsa
    command: cd /var/www && git pull && npm run build

Sub-DAG Composition

Invoke other DAGs as steps for hierarchical workflows:

steps:
  - name: extract
    call: etl/extract
    params: "SOURCE=s3://bucket/data.csv"
  - name: transform
    call: etl/transform
    params: "INPUT=${extract.outputs.result}"
    depends: [extract]
  - name: load
    call: etl/load
    params: "DATA=${transform.outputs.result}"
    depends: [transform]

For more examples, see the Examples documentation.

Features

Zero-Ops

  • Single binary installation, under 128MB memory
  • File-based storage — no PostgreSQL, no Redis, no message brokers
  • Air-gapped / offline capable
  • Cron scheduling with timezone support and zombie detection
  • High availability with scheduler failover
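As a sketch of the cron bullet above: a scheduled workflow is just a DAG file with a schedule: field in standard cron syntax. The CRON_TZ= timezone prefix follows the convention of Go's cron parser; verify it against the scheduling docs for your version. The backup script path here is illustrative:

```yaml
# Runs every day at 02:00 in the Asia/Tokyo timezone.
schedule: "CRON_TZ=Asia/Tokyo 0 2 * * *"
steps:
  - name: nightly-backup
    command: ./backup.sh   # illustrative script path
```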

Full-Power

  • Docker container steps and SSH remote execution
  • DAG composition with reusable sub-workflows
  • Distributed mode to spread tasks across workers
  • Git-based version management for DAGs and docs
  • 19+ executors