# Andi AIRun

Run AI prompts like programs. Executable markdown with shebang, Unix pipes, and output redirection. Supports multiple runtimes (Claude Code, Codex CLI) with cross-cloud provider switching and any-model support — free local or 100+ cloud models.

```bash
# Claude Code: any model or provider
ai                                        # Regular Claude subscription (Pro, Max)
ai --aws --opus --team --resume           # Resume chats on AWS w/ Opus 4.6 + Agent Teams
ai --ollama --bypass --model qwen3-coder  # Ollama local model with bypassPermissions set

# Codex CLI: OpenAI's coding agent
ai --codex                                # Codex with gpt-5.4 (default)
ai --codex --high                         # Codex with gpt-5.4 (flagship)
ai --codex --ollama                       # Codex with local Ollama models

# Run prompts like programs (works with any runtime)
ai --azure --haiku script.md
ai --codex script.md

# Script automation
cat data.json | ./analyze.md > results.txt
```

Choose your runtime — Claude Code or Codex CLI — and switch between clouds + models: AWS Bedrock, Google Vertex, Azure, Vercel, Anthropic API, OpenAI API. Supports free local models (Ollama, LM Studio) and 100+ alternate cloud models via Vercel AI Gateway or Ollama Cloud. Swap and resume conversations mid-task to avoid rate limits and keep working.


What it does:

- Multiple runtimes: Claude Code and Codex CLI with a single `ai` command (`--cc`, `--codex`)
- Executable markdown with `#!/usr/bin/env ai` shebang for script automation
- Unix pipe support: pipe data into scripts, redirect output, chain in pipelines
- Cross-cloud provider switching: use Claude on AWS, Vertex, Azure, or the Anthropic API, or Codex on OpenAI, Azure OpenAI, or OpenRouter, and switch mid-conversation to bypass rate limits
- Model tiers: `--opus`/`--high`, `--sonnet`/`--mid`, `--haiku`/`--low`, mapped to each runtime's models
- Cross-interpreter effort control: `--effort low|medium|high|max`
- Session continuity: `--resume` picks up your previous chats with any model/provider
- Non-destructive: the plain `claude` and `codex` commands keep working exactly as before

From Andi AI Search. Star this repo if it helps!

Latest: Codex CLI support (--codex), cross-interpreter effort levels (--effort), tool profiles (--profile). Script variables, live streaming, Agent Teams, Opus 4.6, local models (Ollama, LM Studio), persistent defaults, 100+ cloud models via Vercel. See CHANGELOG.md.

## Quick Start

Supported Platforms:

- macOS 13.0+
- Linux (Ubuntu 20.04+, Debian 10+)
- Windows 10+ via WSL

Prerequisites: At least one runtime installed — Claude Code or Codex CLI

```bash
# Install a runtime (one or both)
curl -fsSL https://claude.ai/install.sh | bash   # Claude Code (Anthropic)
npm install -g @openai/codex                     # Codex CLI (OpenAI)

# Install Andi AIRun
git clone https://github.com/andisearch/airun.git
cd airun && ./setup.sh
```

You can now run any markdown file as an AI script:

```bash
# Create an executable prompt
cat > task.md << 'EOF'
#!/usr/bin/env ai
Analyze my codebase and summarize the architecture.
EOF

chmod +x task.md
./task.md                         # Runs with your Claude subscription
```

Or run any markdown file directly:

```bash
ai task.md
```

Pipe data and redirect output (Unix-style automation):

```bash
cat data.json | ./analyze.md > results.txt    # Pipe in, redirect out
git log -10 | ./summarize.md                  # Feed git history to AI
./generate.md | ./review.md > final.txt       # Chain scripts together
```
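As a concrete sketch of what a pipeable script looks like (the prompt body here is illustrative, not from the repo), `analyze.md` is just a markdown file whose shebang points at `ai`:

```shell
# Create a pipeable script; the shebang selects the runtime/model,
# and the rest of the file is the prompt.
cat > analyze.md << 'EOF'
#!/usr/bin/env -S ai --haiku
Read the JSON on stdin and summarize its top-level structure.
EOF
chmod +x analyze.md

# Then, per the examples above:
#   cat data.json | ./analyze.md > results.txt
```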

Run scripts from the web (installmd.org support):

```bash
curl -fsSL https://andisearch.github.io/ai-scripts/analyze.md | ai
echo "Explain what a Makefile does" | ai         # Simple prompt
```

Minimal alternative: If you just want basic executable markdown without installing this repo, add an `ai` script to your PATH:

```bash
#!/bin/bash
claude -p "$(tail -n +2 "$1")"
```

This works for simple prompts but lacks provider switching, model selection, stdin piping, output formats, and session isolation. (credit: apf6)
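The `tail -n +2` in that one-liner is what strips the shebang line before handing the rest of the file to `claude -p`. You can see it in isolation:

```shell
# tail -n +2 prints from line 2 onward, i.e. everything after the shebang
printf '#!/usr/bin/env ai\nSummarize this repository.\n' > /tmp/task.md
tail -n +2 /tmp/task.md
# → Summarize this repository.
```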

## Commands

| Command | Description |
|---------|-------------|
| `ai` / `airun` | Universal entry point - run scripts, switch providers |
| `ai update` | Update AI Runner to the latest version |
| `ai-sessions` | View active AI coding sessions |
| `ai-status` | Show current configuration and provider status |

Running ai with no flags matches your claude defaults — if you're logged in with a subscription, ai uses it. Your environment is automatically restored on exit. Add provider flags to switch, or use ai --aws --opus --set-default to save your preferred provider and model for future runs.

## Usage Examples

```bash
# Run a markdown script (auto-detects runtime + provider)
ai task.md

# Choose your runtime
ai --cc                           # Claude Code (default if installed)
ai --codex                        # Codex CLI (OpenAI)

# Claude Code providers
ai --aws                          # AWS Bedrock
ai --vertex                       # Google Vertex AI
ai --apikey                       # Anthropic API
ai --azure                        # Microsoft Azure Foundry
ai --vercel                       # Vercel AI Gateway
ai --pro                          # Claude Pro/Max subscription

# Codex CLI providers
ai --codex                        # OpenAI API (default)
ai --codex --azure                # Azure OpenAI (via config.toml)
ai --codex --profile openrouter   # OpenRouter (via config.toml profile)

# Local models (work with both runtimes)
ai --ollama                       # Ollama with Claude Code
ai --codex --ollama               # Ollama with Codex CLI
ai --lmstudio                     # LM Studio (MLX, Apple Silicon)

# Model tiers (map to each runtime's best models)
ai --opus task.md                 # Claude: Opus 4.6 / Codex: gpt-5.4
ai --sonnet task.md               # Claude: Sonnet 4.6 / Codex: gpt-5.3-codex (mid tier)
ai --haiku task.md                # Claude: Haiku 4.5 / Codex: gpt-5.4-mini
ai --codex --high task.md         # Codex with gpt-5.4

# Effort level (cross-interpreter reasoning control)
ai --effort high task.md          # Claude Code: deeper reasoning
ai --codex --effort max task.md   # Codex: maximum reasoning (xhigh)

# Stream output in real-time
ai --live --skip task.md

# Suppress --live status for CI/CD (clean stdout only)
ai --quiet ./live-script.md > output.md

# Live output + file redirect (narration to console, clean content to file)
./live-report.md > report.md

# Override script variables (--topic, --style match declared vars: names)
./summarize-topic.md --live --topic "the fall of rome" --style "peter griffin"

# Resume last conversation
ai --aws --resume

# Save runtime + provider + model as default
ai --codex --high --set-default   # Always use Codex + gpt-5.4
ai --aws --opus --set-default     # Always use Claude Code + AWS + Opus
ai --clear-default                # Remove saved default

# Smart auto permissions (AI classifier for Claude Code, sandbox for Codex)
ai --auto task.md

# Enable agent teams (Claude Code, experimental, interactive only)
ai --team                         # Auto display mode
ai --aws --opus --team            # Teams with AWS Bedrock + Opus
```

## Features

### Executable Markdown

Create markdown files with prompts that run directly via shebang:

```markdown
#!/usr/bin/env ai
Summarize the architecture of this codebase.

#!/usr/bin/env -S ai --aws
Use AWS Bedrock to analyze this code.

#!/usr/bin/env -S ai --codex --high
Use Codex CLI with the flagship model to review this code.

#!/usr/bin/env -S ai --opus --live
Review this PR for security issues. Stream output in real-time.
```
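The shebang mechanism itself is plain Unix: when you execute the file, the kernel invokes the interpreter with the script's path as its argument, which is why `./task.md` and `ai task.md` are equivalent. A toy stand-in interpreter (hypothetical, just for illustration) makes this visible:

```shell
# A fake "interpreter" that just reports what the kernel handed it
cat > /tmp/fake-ai << 'EOF'
#!/bin/bash
echo "interpreter got: $1"
EOF
chmod +x /tmp/fake-ai

# A "script" whose shebang points at the fake interpreter
printf '#!/tmp/fake-ai\nThis body would be the prompt.\n' > /tmp/demo.md
chmod +x /tmp/demo.md

/tmp/demo.md
# → interpreter got: /tmp/demo.md
```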

Scripts that write files or run commands need a permission mode:

```markdown
#!/usr/bin/env -S ai --skip
Run ./test/automation/run_tests.sh and report results.
```

(`--skip` is a shortcut for `--dangerously-skip-permissions`. See also `--bypass` for `--permission-mode bypassPermissions`.)

```markdown
#!/usr/bin/env -S ai --auto
Run tests and fix any issues found.
```

(`--auto` uses an AI classifier (Claude Code) or a sandbox (Codex) to auto-approve safe actions.)

```markdown
#!/usr/bin/env -S ai --allowedTools 'Bash(npm test)' 'Read'
Run the test suite and report results. Do not modify any files.
```

(`--allowedTools` is a Claude Code flag, passed through by AI Runner.)

Usage:

```bash
chmod +x task.md
./task.md                          # Execute directly (uses shebang flags)
ai --vercel task.md                # Override: use Vercel instead
ai --opus task.md                  # Override: use Opus instead
```

Tip: Use `#!/usr/bin/env -S` (with `-S`) to pass flags in the shebang line. The kernel passes everything after the interpreter path as a single argument, so `#!/usr/bin/env ai --aws` would make `env` look for a program literally named `ai --aws`; `-S` tells `env` to split the string into separate arguments.
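You can check the `-S` splitting behavior directly from the command line (requires an `env` that supports `-S`, i.e. GNU coreutils or BSD/macOS):

```shell
# -S splits one string into separate words, exactly as a shebang needs
env -S 'echo split into words'
# → split into words
```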

Flag precedence: CLI flags > shebang flags > saved defaults. Running `ai --vercel task.md` overrides the script's shebang provider, and shebang flags in turn override any saved defaults.
