
<a id="readme-top"></a>

<div align="center"> <h1> <br> Binex <br> </h1> <p align="center"> <strong>Open-source visual orchestrator for AI agent workflows</strong> <br> Build, run, debug, and replay multi-agent pipelines — 100% locally. </p> <p> <a href="https://pypi.org/project/binex/"><img src="https://img.shields.io/pypi/v/binex?style=flat-square&color=orange" alt="PyPI"></a> <a href="https://pypi.org/project/binex/"><img src="https://img.shields.io/pypi/pyversions/binex?style=flat-square" alt="Python"></a> <a href="https://github.com/Alexli18/binex/blob/master/LICENSE"><img src="https://img.shields.io/github/license/Alexli18/binex?style=flat-square" alt="License"></a> <a href="https://github.com/Alexli18/binex/actions"><img src="https://img.shields.io/github/actions/workflow/status/Alexli18/binex/ci.yml?style=flat-square&label=CI" alt="CI"></a> <a href="https://alexli18.github.io/binex/"><img src="https://img.shields.io/badge/docs-online-blue?style=flat-square" alt="Docs"></a> <a href="https://github.com/Alexli18/binex/stargazers"><img src="https://img.shields.io/github/stars/Alexli18/binex?style=flat-square" alt="Stars"></a> </p> <p> <a href="#demo">Demo</a> &middot; <a href="#installation">Install</a> &middot; <a href="#web-ui">Web UI</a> &middot; <a href="#features">Features</a> &middot; <a href="https://alexli18.github.io/binex/">Docs</a> &middot; <a href="https://github.com/Alexli18/binex/issues">Issues</a> </p> </div> <br>

## Demo

### 1. Start in seconds

<div align="center"> <img src="https://raw.githubusercontent.com/Alexli18/binex/master/assets/demo-start.gif" alt="Quick Start" width="800"> <br><sub>Install, run <code>binex ui</code>, and you're building workflows</sub> </div>

### 2. Build & run custom workflows

<div align="center"> <img src="https://raw.githubusercontent.com/Alexli18/binex/master/assets/demo-custom.gif" alt="Custom Workflow" width="800"> <br><sub>Drag & drop nodes, configure models, run with human input</sub> </div>

### 3. Explore & debug results

<div align="center"> <img src="https://raw.githubusercontent.com/Alexli18/binex/master/assets/demo-explore.gif" alt="Explore Results" width="800"> <br><sub>Debug, trace, diff, lineage — full post-mortem inspection</sub> </div> <p align="right">(<a href="#readme-top">back to top</a>)</p>

## What is Binex?

Binex is an open-source, fully local runtime for AI agent workflows. No cloud. No telemetry. No vendor lock-in.

```bash
pip install binex
binex ui
```

That's it. The browser opens and you're building AI workflows.

## Why Binex?

- 100% local — your data never leaves your machine
- 100% open source — MIT licensed, audit every line
- Zero telemetry — no tracking, no analytics, no surprises
- Full debuggability — every input, output, prompt, and cost is visible
- Any model — OpenAI, Anthropic, Google, Ollama, OpenRouter, DeepSeek, and 40+ more via LiteLLM
<p align="right">(<a href="#readme-top">back to top</a>)</p>

## Installation

Requires Python 3.11+.

```bash
pip install binex
```

With extras (quoted so the brackets survive shells like zsh):

```bash
pip install "binex[langchain]"    # LangChain Runnables
pip install "binex[crewai]"       # CrewAI Crews
pip install "binex[autogen]"      # AutoGen Teams
pip install "binex[telemetry]"    # OpenTelemetry tracing
pip install "binex[rich]"         # Rich colored CLI output
```
<p align="right">(<a href="#readme-top">back to top</a>)</p>

## Web UI

Launch the visual workflow editor:

```bash
binex ui
```

### Visual Drag & Drop Editor

<div align="center"> <img src="https://raw.githubusercontent.com/Alexli18/binex/master/screenshots/new-editor.png" alt="Workflow Editor" width="800"> <br><sub>Collapsible node sections, tool picker with 10 built-in tools, MCP config, Visual ↔ YAML sync</sub> </div> <br>

**6 node types:** LLM Agent, Local Script, Human Input, Human Approve, Human Output, A2A Agent

- 20+ preset models, including 8 free OpenRouter models
- Built-in prompt library (Planner, Researcher, Analyzer, Writer, Reviewer, Summarizer)
- Tool Picker — 10 built-in tools, MCP server integration, custom Python tools
- Collapsible sections — Model, Prompt, Tools, Advanced per LLM node
- Workflow Settings panel — configure MCP servers (stdio/HTTP) and cron schedules
- Switch between Visual and YAML modes — changes sync both ways (including tools & MCP)
- Real-time cost estimation as you build
- Custom model input — use any LiteLLM-compatible model
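The real-time cost estimate above boils down to token arithmetic. A minimal sketch of that calculation, with made-up prices — this is not Binex's actual rate table or implementation:

```python
# Illustrative only: per-node LLM cost from token counts and
# per-million-token prices. Prices here are invented examples.
PRICES = {"example/model": {"input": 0.15, "output": 0.60}}  # USD per 1M tokens

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one LLM call."""
    p = PRICES[model]
    return input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]

# 10k prompt tokens + 2k completion tokens:
print(round(estimate_cost("example/model", 10_000, 2_000), 6))  # → 0.0027
```

Summing this over every LLM node in a run gives the kind of per-run total the dashboard displays.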

### Dashboard

<div align="center"> <img src="https://raw.githubusercontent.com/Alexli18/binex/master/screenshots/new-dashboard.png" alt="Runs Dashboard" width="800"> <br><sub>All runs at a glance — status, cost, duration</sub> </div>

### Debugging & Analysis

<div align="center"> <img src="https://raw.githubusercontent.com/Alexli18/binex/master/screenshots/new-debug.png" alt="Debug View" width="380"> <img src="https://raw.githubusercontent.com/Alexli18/binex/master/screenshots/new-trace.png" alt="Trace Timeline" width="380"> <br><sub>Left: Node-by-node debug inspection. Right: Gantt timeline with anomaly detection.</sub> </div>

### Run Comparison

<div align="center"> <img src="https://raw.githubusercontent.com/Alexli18/binex/master/screenshots/new-diff.png" alt="Diff View" width="800"> <br><sub>Side-by-side diff with filtering: changed, failed, cost delta</sub> </div>

### 19 Pages — Full CLI Parity

| Category | Pages |
|----------|-------|
| Workflows | Browse, Visual Editor (with tool picker & MCP config), Scaffold Wizard |
| Runs | Dashboard, RunLive (SSE), RunDetail |
| Analysis | Debug (input/output artifacts), Trace (Gantt timeline), Diagnose (root-cause), Lineage (artifact graph) |
| Comparison | Diff (side-by-side with filter bar, compare with previous run), Bisect (NodeMap, DAG visualization, divergence metrics) |
| Costs | Cost Dashboard (charts), Budget Management |
| System | Doctor (health), Plugins, Gateway, Export, Scheduler (cron) |

### Navigation

Sidebar organized into 4 groups: Build (Editor, Scaffold), Runs (Dashboard), Analyze (Compare, Bisect), System (Gateway, Plugins, Doctor). Run-specific pages (Debug, Trace, Diagnose, Lineage, Costs) open from run context.

### Replay

Debug any node → click Replay → swap the model or prompt → re-run just that node. No re-running the entire pipeline.

<p align="right">(<a href="#readme-top">back to top</a>)</p>

## Quickstart

### CLI

```bash
# Zero-config demo
binex hello
```

Tip: Runs a 2-node demo workflow (producer → consumer); no API keys needed.
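For a feel of the producer → consumer shape, here is a sketch in Binex's YAML schema. The node names and `local://` targets are illustrative guesses, not the actual workflow shipped with `binex hello`:

```yaml
# Illustrative only — the real demo workflow may differ.
name: hello-demo
nodes:
  producer:
    agent: "local://demo.produce"   # hypothetical in-process callable
    outputs: [output]

  consumer:
    agent: "local://demo.consume"   # hypothetical in-process callable
    depends_on: [producer]
    outputs: [output]
```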

```bash
# Run a workflow
binex run examples/simple.yaml
```

Tip: Uses your configured LLM provider. Set `OPENAI_API_KEY`, or use Ollama for fully local runs.

```bash
# Inspect the run
binex debug latest
binex trace latest
```

Tip: `debug` shows per-node inputs/outputs; `trace` shows the execution timeline as a Gantt chart.

### Web UI

```bash
binex ui
```

Tip: Opens the browser automatically. Use `--port 9000` to change the port and `--no-browser` to skip auto-open.

### Create a Workflow

```yaml
name: research-pipeline
nodes:
  input:
    agent: "human://input"
    outputs: [output]

  planner:
    agent: "llm://gemini/gemini-2.5-flash"
    system_prompt: "Break this topic into research questions"
    tools:
      - "builtin://web_search"
      - "builtin://calculator"
    depends_on: [input]
    outputs: [output]

  researcher:
    agent: "llm://openrouter/google/gemma-3-27b-it:free"
    system_prompt: "Investigate and report findings"
    tools:
      - "builtin://fetch_url"
    depends_on: [planner]
    outputs: [output]

  output:
    agent: "human://output"
    depends_on: [researcher]
    outputs: [output]
```
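A workflow like this is a DAG: the `depends_on` edges determine execution order. A sketch of how such an order can be derived with Python's standard library — illustrative, not Binex's scheduler:

```python
# Resolve depends_on declarations into an execution order (Kahn-style
# topological sort via the stdlib). Node names match the example above.
from graphlib import TopologicalSorter

deps = {
    "input": set(),              # no dependencies
    "planner": {"input"},
    "researcher": {"planner"},
    "output": {"researcher"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # → ['input', 'planner', 'researcher', 'output']
```

Independent branches would sort into any valid interleaving, which is what lets a runtime execute them concurrently.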
<p align="right">(<a href="#readme-top">back to top</a>)</p>

## Features

### Agent Adapters

| Prefix | Description |
|--------|-------------|
| `local://` | In-process Python callable |
| `llm://` | LLM via LiteLLM (40+ providers) |
| `a2a://` | Remote agent via A2A protocol |
| `human://input` | Free-text input from user |
| `human://approve` | Approval gate with conditional branching |
| `human://output` | Display results to user |
| `builtin://` | 10 built-in tools (calculator, web_search, shell_command, etc.) |
| `mcp://` | MCP server tools (stdio or HTTP transport) |
| `python://` | Custom Python function as tool |
| `langchain://` | LangChain Runnable (plugin) |
| `crewai://` | CrewAI Crew (plugin) |
| `autogen://` | AutoGen Team (plugin) |
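Among these, `local://` wraps an in-process Python callable. Binex's actual registration API and adapter contract are not shown in this README, so the following is a purely hypothetical sketch of the kind of function such a node might wrap:

```python
# Hypothetical sketch — Binex's real local:// contract may differ.
# The idea: a plain function that receives upstream artifacts and
# returns new ones, here keyed by artifact name.

def summarize(inputs: dict) -> dict:
    """Example node body: report the word count of the upstream text."""
    text = inputs.get("output", "")
    return {"output": f"{len(text.split())} words"}

result = summarize({"output": "debuggable runtime for agent workflows"})
print(result["output"])  # → 5 words
```

Because the callable runs in-process, its inputs and outputs can be captured as artifacts for the debug and lineage views without any network hop.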

### CLI Commands

| Command | Description |
|---------|-------------|
| `binex run` | Execute a workflow |
| `binex ui` | Launch Web UI |
| `binex debug` | Post-mortem inspection |
| `binex trace` | Execution timeline |
| `binex replay` | Re-run with agent swaps |
| `binex diff` | Compare two runs |
| `binex diagnose` | Root-cause failure analysis |
| `binex bisect` | Find first divergence between two runs |
| `binex cost show` | Cost breakdown per node |
| `binex explore` | Interactive TUI dashboard |
| `binex scaffold` | Generate workflow from DSL |
| `binex export` | Export to CSV/JSON |
| `binex doctor` | System health check |
| `binex hello` | Zero-config demo |
| `binex list` | List available workflows |
| `binex start` | Create a new project interactively |
| `binex init` | Deprecated alias for `binex start` |
| `binex validate` | Validate workflow YAML |
| `binex cancel` | Cancel a running workflow |
| `binex artifacts` | Inspect artifacts |
| `binex dev` | Local development environment |
| `binex gateway` | A2A Gateway management |
| `binex plugins` | Manage adapter plugins |
| `binex workflow` | Workflow versioning & inspection |
| `binex scheduler start` | Start cron-based workflow scheduler |
| `binex scheduler list` | List scheduled workflows |
| `binex scheduler add/remove` | Register/unregister workflow files |

## LLM Providers

**Ope
