openengineer
bash is all you need.
orchestration layer for autonomous ai agents — using linear or github issues as your control plane. battle-tested in production, peaking at 160+ tasks/night.
your job becomes: write good specs, enrich them with comments, chain 10-15 issues, run one bash command, and go to sleep. the agents do the rest.
we use kiro-cli and opencode — but the whole system is agent-swappable. use claude code, aider, cursor, or anything that accepts a prompt.
┌──────────────────────────────────────────────────────────────────────────┐
│ YOUR ISSUES (linear or github — your control plane) │
│ │
│ the full issue (title + description + every comment) becomes your │
│ agent's prompt. every comment you add makes the agent smarter. │
└──────────────────────────────────────────────────────────────────────────┘
│ │ │
│ @mention │ assign / label │ you run it yourself
▼ ▼ ▼
┌──────────────┐ ┌──────────────────┐ ┌──────────────────────────────┐
│ enrichment │ │ terminal │ │ .agent/run-tasks.sh │
│ │ │ dispatch │ │ │
│ @mention a │ │ │ │ chains 10-15 issues, │
│ research │ │ webhook opens │ │ runs for hours unattended, │
│ agent in │ │ a tmux session │ │ entire epic in one command │
│ the issue │ │ in the correct │ │ │
│ → it reads │ │ repo, runs the │ │ each task = fresh agent │
│ your code │ │ full pipeline │ │ session, no context bleed │
│ → enriches │ │ for that issue │ │ │
│ the spec │ │ │ │ you just run it straight │
│ → posts │ │ │ │ in your terminal: │
│ back to │ │ │ │ .agent/run-tasks.sh │
│ the issue │ │ │ │ │
└──────────────┘ └──────────────────┘ └──────────────────────────────┘
│ │ │
▼ ▼ ▼
┌──────────────────────────────────────────────────────────────────────────┐
│ YOUR AGENT (swappable) │
│ │
│ kiro-cli • opencode • claude code • aider • cursor • anything │
│ set AGENT_CLI in config.sh — run-tasks.sh doesn't care which │
└──────────────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────┐
│ THE ENRICHMENT WORKFLOW │
│ │
│ 1. you write your expected outcome (a few lines — what should happen) │
│ 2. @mention a research agent in the issue → it explores the codebase, │
│ reads patterns, finds the right files, and enriches your spec with │
│ implementation details, file:line references, and architecture │
│ context — all posted back as comments on the issue │
│ 3. the enriched spec goes to a worker agent → it doesn't research, │
│ it just works. every file, every line, every acceptance criterion │
│ is already in the prompt. │
│ 4. chain 10-15 enriched issues → .agent/run-tasks.sh → epic in hours │
└──────────────────────────────────────────────────────────────────────────┘
│
▼
┌────────────────────────────────────────────────────────────────────────┐
│ THE EXECUTION PIPELINE │
│ │
│ you run .agent/run-tasks.sh in your terminal. here's what happens: │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌────────────┐ │
│ │ fetch open │──►│ create │──►│ spawn agent │──►│ agent │ │
│ │ issues from │ │ staging │ │ with full │ │ implements │ │
│ │ linear/gh │ │ branch + PR │ │ issue as │ │ + self- │ │
│ │ │ │ │ │ the prompt │ │ reviews │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────┬──────┘ │
│ │ │
│ ┌─────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌────────────┐ │
│ │ 13 quality │──►│ 14th gate: │──►│ push to │──►│ coderabbit │ │
│ │ checks │ │ re-read │ │ github, │ │ reviews │ │
│ │ (code- │ │ task, update│ │ update PR, │ │ the PR → │ │
│ │ review.sh) │ │ workpad │ │ post to │ │ findings │ │
│ │ │ │ │ │ linear │ │ → linear │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────┬──────┘ │
│ │ │
│ ┌───────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ triage: │──►│ merge → │──►│ slack │ │
│ │ agent picks │ │ deploy → │ │ notification│ │
│ │ up review │ │ health │ │ + scenario │ │
│ │ findings, │ │ check → │ │ test │ │
│ │ fixes them │ │ fix loop │ │ results │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ nag hook: every 7 tool calls, the agent is forced to re-read the │
│ original requirements. prevents drift. catches forgotten criteria. │
│ │
│ repeat for each issue in the chain. 10-15 tasks, hours of autonomous │
│ work, one command. │
└────────────────────────────────────────────────────────────────────────┘
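the nag hook in the pipeline above can be sketched as a simple counter. this is a hypothetical illustration — the file paths, function name, and how the hook is wired into the agent loop are assumptions, not the actual implementation:

```bash
# hypothetical nag-hook sketch: fires a reminder every Nth tool call.
# counter lives in a file so it survives subshells between tool calls.
NAG_INTERVAL=7
COUNTER_FILE="${COUNTER_FILE:-/tmp/nag-counter}"

nag_hook() {
  local count
  count=$(( $(cat "$COUNTER_FILE" 2>/dev/null || echo 0) + 1 ))
  echo "$count" > "$COUNTER_FILE"
  # every 7th tool call, force the agent to re-read the requirements
  if (( count % NAG_INTERVAL == 0 )); then
    echo "REMINDER: re-read the original requirements before continuing."
  fi
}
```

the point is only the cadence: most calls pass through silently, and drift gets interrupted on a fixed interval rather than when someone notices.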
your issues are your orchestration layer
linear (or github issues) isn't just your task tracker — it's your whole control plane. the entire issue becomes your agent's prompt:
- title → what to do
- description → your full spec (acceptance criteria, file references, architecture notes)
- comments → your enrichment context (all comments get concatenated into the prompt)
- labels → routing (kiro label → kiro-cli, opencode label → opencode CLI, add your own)
- workflow states → your automated pipeline (open → in progress → in review → done)
issues run oldest-first, so multi-part specs execute in order. write part 1, part 2, part 3 as separate issues — they chain sequentially.
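the label routing could look something like this — a minimal sketch, where the label names are the examples above, the fallback is illustrative, and the real mapping lives in config.sh:

```bash
# hypothetical label → agent routing; real mapping lives in config.sh.
route_agent() {
  case "$1" in
    kiro)     echo "kiro-cli" ;;
    opencode) echo "opencode" ;;
    *)        echo "${AGENT_CLI:-opencode}" ;;   # fall back to the configured default
  esac
}
```

adding a new agent is one more case branch — nothing downstream cares which binary gets spawned.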
```
# what your agent actually sees (simplified):
# title: implement password reset flow
# description: <your full spec with acceptance criteria>
# comments:
#   you (2026-03-08): the reset token should expire after 1 hour
#   you (2026-03-08): use the existing email service, don't create a new one
#   research-agent (2026-03-08): found AuthService at src/auth/auth.service.ts:84,
#     uses JWT with 30m expiry. reset flow should follow the same pattern.
#     existing EmailService at src/email/email.service.ts — use sendTemplate().
```
every comment you add makes the agent smarter about that task. this is the leverage — you're not writing code, you're writing context. the research agent enriches your specs so the worker agent doesn't waste cycles exploring — it just builds.
run-tasks.sh — the full pipeline
~2000 lines of battle-tested orchestration. here's what happens when it runs:
1. fetch issues
queries linear for issues matching your label + open state. supports linear, github issues, and github projects as task sources. oldest-first ordering for sequential multi-part specs.
```bash
.agent/run-tasks.sh --linear                 # all open issues with your label
.agent/run-tasks.sh --linear --max-tasks 5   # cap at 5
.agent/run-tasks.sh --issue DEV-1076         # single issue
.agent/run-tasks.sh --dry-run                # preview without running
```
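the flag handling behind those invocations can be sketched as a plain case loop — a hypothetical parser, not the actual run-tasks.sh code:

```bash
# hypothetical flag parser matching the invocations above.
parse_flags() {
  MAX_TASKS=0; SINGLE_ISSUE=""; DRY_RUN=false; SOURCE=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --linear)    SOURCE=linear ;;          # task source
      --max-tasks) MAX_TASKS=$2; shift ;;    # cap the chain length
      --issue)     SINGLE_ISSUE=$2; shift ;; # run exactly one issue
      --dry-run)   DRY_RUN=true ;;           # preview, don't execute
    esac
    shift
  done
}
```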
2. create staging branch + PR
creates (or reuses) a staging → main PR. all task commits land on one branch, one PR. you review one diff, not twenty.
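the create-or-reuse step might look like this — a sketch assuming a local branch named staging and gh for the PR half; the details are guesses, not the script's actual code:

```bash
# hypothetical create-or-reuse of the staging branch + PR.
ensure_staging() {
  if git rev-parse --verify staging >/dev/null 2>&1; then
    git checkout -q staging        # reuse: all later tasks land here too
  else
    git checkout -q -b staging     # first run: create the batch branch
    gh pr create --base main --head staging \
      --title "staging → main" --body "batched agent tasks" || true
  fi
}
```

idempotent on purpose: rerunning the pipeline never forks a second batch branch.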
3. for each issue: spawn agent subprocess
each task gets a fresh, isolated agent session. no context bleed between tasks.
workpad creation — extracts acceptance criteria from the issue body into a per-task scratch file. the agent writes implementation notes here as it works. institutional memory that persists after the run.
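workpad extraction might look something like this — the "## acceptance criteria" heading is an assumed convention for the issue body, not necessarily what run-tasks.sh matches on:

```bash
# hypothetical workpad creation: copy the acceptance-criteria section
# of the issue body (read on stdin) into a per-task scratch file.
make_workpad() {   # $1 = issue id
  mkdir -p .agent/workpads
  {
    echo "# workpad: $1"
    awk '/^## acceptance criteria/{grab=1; next} /^## /{grab=0} grab'
  } > ".agent/workpads/$1.md"
}
```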
prompt construction — the agent gets:
- the full linear issue (title + description + all comments merged)
- path to its workpad file
- behavioral instructions (read standards first, implement, self-review, check acceptance criteria)
```
# the actual prompt template (simplified):
# you are working on Linear issue DEV-1076.
# title: implement password reset flow
# workpad file: .agent/workpads/DEV-1076.md
# description: <full spec + all comments>
# commit with message: 'fix(DEV-1076): brief description'
```
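assembling that prompt can be sketched with a heredoc — variable names and exact layout are illustrative, assuming the issue fields have already been fetched:

```bash
# hypothetical prompt assembly from pre-fetched issue fields.
build_prompt() {   # $1 = issue id, $2 = title, $3 = description + merged comments
  cat <<EOF
you are working on Linear issue $1.
title: $2
workpad file: .agent/workpads/$1.md
description: $3
EOF
}
```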