
DashClaw

🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require approvals, and produce audit-ready decision trails.

Install / Use

/learn @ucsandman/DashClaw

README

<div align="center"> <img src="public/images/logo-circular.png" alt="DashClaw" width="240" /> <h1>DashClaw</h1> <p><strong>Decision Infrastructure for AI agents.</strong></p> <p>Stop agents before they make expensive mistakes.</p> <p><sub>Try it in 10 seconds</sub></p> <pre><code>npx dashclaw-demo</code></pre> <p><sub>No setup. Opens Decision Replay automatically.</sub></p> <img src="public/images/demo-gif2.gif" alt="DashClaw Demo" width="1000" /> <br /> <p><strong>Works with:</strong></p> <p>LangChain • CrewAI • OpenClaw • OpenAI • Anthropic • AutoGen • Claude Code • Codex • Gemini CLI • Custom agents</p> <br /> <p>Intercept decisions. Enforce policies. Record evidence.</p> <br /> <p><strong>Agent &rarr; DashClaw &rarr; External Systems</strong></p> <p>DashClaw sits between your agents and your external systems. It evaluates policies before an agent action executes and records verifiable evidence of every decision.</p> <br /> <p><a href="https://dashclaw.io/demo">View Live Demo</a></p>

<a href="https://dashclaw.io"><img src="https://img.shields.io/badge/website-dashclaw.io-orange?style=flat-square" alt="Website" /></a> <a href="https://dashclaw.io/docs"><img src="https://img.shields.io/badge/docs-SDK%20%26%20API-blue?style=flat-square" alt="Docs" /></a> <a href="https://github.com/ucsandman/DashClaw/stargazers"><img src="https://img.shields.io/github/stars/ucsandman/DashClaw?style=flat-square&color=yellow" alt="GitHub stars" /></a> <a href="https://github.com/ucsandman/DashClaw/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-green?style=flat-square" alt="License" /></a> <a href="https://www.npmjs.com/package/dashclaw"><img src="https://img.shields.io/npm/v/dashclaw?style=flat-square&color=orange" alt="npm" /></a> <a href="https://pypi.org/project/dashclaw/"><img src="https://img.shields.io/pypi/v/dashclaw?style=flat-square&color=orange" alt="PyPI" /></a>

</div> <br />

Deploy

Deploy with Vercel

$0 to deploy — Vercel free tier + Neon free tier. Click the button, add the Neon integration when prompted, fill in the environment variables, and you're live. Database schema is created automatically during the build — no manual migration step required.

After deploy

  1. Open your app — Visit https://your-app.vercel.app and sign in.
  2. Copy the snippet — Mission Control shows a ready-to-run code example with your API key and base URL pre-filled.
  3. Run it — node --env-file=.env demo.js and watch governance happen.

Optional

  • Live decision stream — Create a free Upstash Redis instance and add UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN in Vercel env vars. Without this, Mission Control uses in-memory events (fine for getting started, but won't persist across serverless invocations).
  • Verify at /setup — Open https://your-app.vercel.app/setup to confirm all systems are green.

Connect Your Agent

Three ways to get governed — pick what fits your workflow:

Option 1: Install the skill (30 seconds)

Give your AI agent the dashclaw-platform-intelligence skill and it instruments itself — no code changes, no manual wiring. The agent registers with DashClaw, sets up guard checks, records decisions, and starts tracking assumptions automatically.

# Download the skill into your agent's skill directory
cp -r public/downloads/dashclaw-platform-intelligence .claude/skills/

Set two environment variables and your agent is governed on its next run:

export DASHCLAW_BASE_URL=https://your-dashclaw-instance.com
export DASHCLAW_API_KEY=your_api_key

This is the fastest path. We gave our own OpenClaw agent the skill and it put itself on DashClaw in one conversation.

Option 2: Drop in Claude Code hooks (zero-code)

Govern every Bash, Edit, Write, and MultiEdit call Claude Code makes — no SDK instrumentation needed:

cp hooks/dashclaw_pretool.py  .claude/hooks/
cp hooks/dashclaw_posttool.py .claude/hooks/

Set DASHCLAW_BASE_URL, DASHCLAW_API_KEY, and DASHCLAW_HOOK_MODE=enforce. Every tool call becomes a governed, replayable decision record. See hooks/README.md for the full guide.
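Copying the scripts is only half the wiring: Claude Code discovers hooks through `.claude/settings.json`. A sketch of what the registration might look like (the matcher pattern and command lines below are assumptions for illustration; hooks/README.md is the authoritative setup guide):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash|Edit|Write|MultiEdit",
        "hooks": [
          { "type": "command", "command": "python3 .claude/hooks/dashclaw_pretool.py" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash|Edit|Write|MultiEdit",
        "hooks": [
          { "type": "command", "command": "python3 .claude/hooks/dashclaw_posttool.py" }
        ]
      }
    ]
  }
}
```

With `DASHCLAW_HOOK_MODE=enforce`, the pre-tool hook can deny a tool call before it runs; the post-tool hook records the outcome.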

Option 3: Use the SDK (full control)

For custom agents where you want precise control over what gets governed:

npm install dashclaw    # Node.js
pip install dashclaw    # Python

The 4-step governance loop — Guard, Record, Verify, Outcome — is covered in the Quickstart below.

For framework-specific step-by-step guides (Claude Code, OpenAI Agents SDK, LangGraph, CrewAI), visit /connect on your DashClaw instance.


What is DashClaw?

DashClaw is not observability. It is control before execution.

AI agents generate actions from goals and context. They do not follow deterministic code paths. Therefore debugging alone is insufficient. Agents require governance.

DashClaw provides decision infrastructure to:

  • Intercept risky agent actions.
  • Enforce policy checks before execution.
  • Require human approval (HITL) for sensitive operations.
  • Record verifiable decision evidence to detect reasoning drift.
  • Track agent learning velocity — the only platform that measures whether your agents are getting better or worse over time.

⚡ See DashClaw stop an agent from deleting production data

Run DashClaw instantly with one command.

npx dashclaw-demo

What happens:

  1. A local DashClaw demo runtime starts automatically.
  2. A demo agent attempts a high-risk production deploy.
  3. DashClaw intercepts the decision and blocks the action before execution.
  4. Your browser opens directly to the Decision Replay showing the governance trail.

No repo clone. No environment variables. No configuration. Just one command.


What you’ll see

  • 🔴 High risk score (85)
  • 🛑 Policy requires approval before deploy
  • 🧠 Assumptions recorded by the agent
  • 📊 Full decision timeline with outcome

DashClaw Decision Replay
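Those verdicts come from a policy check against the action's risk score. As a mental model only, a threshold-style guard rule can be sketched in a few lines (illustrative; DashClaw's real policy engine is declarative and configured in the Guard Policies UI, not hardcoded like this):

```python
def evaluate_policy(risk_score, approval_threshold=70, block_threshold=95):
    """Map a risk score to a guard verdict: allow, require_approval, or block."""
    if risk_score >= block_threshold:
        return "block"
    if risk_score >= approval_threshold:
        return "require_approval"
    return "allow"

# The demo agent's score of 85 crosses the approval threshold,
# so the deploy is held for a human decision.
verdict = evaluate_policy(85)
```

In the demo above, that is exactly the path you see: a score of 85 lands in the require-approval band, and the deploy never executes.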


Platform Overview

<div align="center">

Mission Control — Real-time strategic posture, decision timeline, and intervention feed.

<img src="public/images/screenshots/Mission Control.png" alt="Mission Control" width="1000" />

<br /><br />

Approval Queue — Human-in-the-loop intervention with risk scores and one-click Allow / Deny.

<img src="public/images/screenshots/Approvals.png" alt="Approval Queue" width="1000" />

<br /><br />

Guard Policies — Declarative rules that govern agent behavior before actions execute.

<img src="public/images/screenshots/policies.png" alt="Guard Policies" width="1000" />

<br /><br />

Drift Detection — Statistical behavioral drift analysis with critical alerts when agents deviate from baselines.

<img src="public/images/screenshots/Assumptions.png" alt="Drift Detection" width="1000" /> </div>

🏗️ First Real Agent

Fastest: Install the dashclaw-platform-intelligence skill and let your agent instrument itself.

Hands-on: Use the OpenAI Governed Agent Starter to see the SDK in a real customer communication workflow:

cd examples/openai-governed-agent
npm install && cp .env.example .env
# Add your DASHCLAW_API_KEY to .env
node index.js

View the Starter Source


Quickstart

1. Install the SDK

Node.js:

npm install dashclaw

Python:

pip install dashclaw

2. Create the Client

Node.js:

import { DashClaw, GuardBlockedError, ApprovalDeniedError } from 'dashclaw';

const claw = new DashClaw({
  baseUrl: process.env.DASHCLAW_BASE_URL, // or your DashClaw instance URL
  apiKey: process.env.DASHCLAW_API_KEY,
  agentId: 'my-agent'
});

Python:

from dashclaw.client import DashClaw, GuardBlockedError, ApprovalDeniedError
import os

claw = DashClaw(
    base_url=os.environ["DASHCLAW_BASE_URL"],
    api_key=os.environ["DASHCLAW_API_KEY"],
    agent_id="my-agent"
)

3. Run Your First Governed Action

The minimal governance loop wraps your agent's real-world actions:

// 1. Guard -> "Can I do X?"
const decision = await claw.guard({
  action_type: 'database_query',
  risk_score: 50
});

// 2. Record -> "I am attempting X."
const action = await claw.createAction({
  action_type: 'database_query',
  declared_goal: 'Extract user statistics'
});

// 3. Verify -> "I believe Y is true while doing X."
await claw.recordAssumption({
  action_id: action.action_id,
  assumption: 'The database is read-only for these credentials'
});

try {
  // Execute the real action here...
  // ...

  // 4. Outcome -> "X finished with result Z."
  await claw.updateOutcome(action.action_id, { status: 'completed' });
} catch (error) {
  await claw.updateOutcome(action.action_id, { status: 'failed', error_message: error.message });
}
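The same loop in Python, sketched as a reusable function. The snake_case method names here (`guard`, `create_action`, `record_assumption`, `update_outcome`) are assumptions inferred from the Node API; verify them against the Python SDK reference before relying on them:

```python
def run_governed_action(claw, execute):
    """One pass through the Guard -> Record -> Verify -> Outcome loop.

    `claw` is a DashClaw client; `execute` is a zero-argument callable that
    performs the real side effect. Method names are assumed to mirror the
    Node SDK in snake_case.
    """
    # 1. Guard -> "Can I do X?"
    claw.guard(action_type="database_query", risk_score=50)

    # 2. Record -> "I am attempting X."
    action = claw.create_action(
        action_type="database_query",
        declared_goal="Extract user statistics",
    )

    # 3. Verify -> "I believe Y is true while doing X."
    claw.record_assumption(
        action_id=action["action_id"],
        assumption="The database is read-only for these credentials",
    )

    try:
        result = execute()  # the real action happens here
        # 4. Outcome -> "X finished with result Z."
        claw.update_outcome(action["action_id"], status="completed")
        return result
    except Exception as exc:
        claw.update_outcome(
            action["action_id"], status="failed", error_message=str(exc)
        )
        raise
```

Passing the client in keeps the loop testable: in a unit test you can hand it a stub client and assert that a failed `execute` still records a `failed` outcome.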

Learning loop: The guard response includes a learning field with your agent's historical performance — recent scores, drift status, and patterns learned from past outcomes. Your agent gets smarter every cycle.
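As a sketch of how an agent might act on that field: the `recent_scores` and `drift` keys below are assumed names chosen for illustration, not the documented response shape — inspect a live guard response for the real fields:

```python
def is_improving(learning):
    """Heuristic read of a guard response's learning data (field names assumed)."""
    scores = learning.get("recent_scores", [])
    if len(scores) < 2:
        return False  # not enough history to call a trend
    # Improving = scores trending up and no critical drift alert
    return scores[-1] > scores[0] and learning.get("drift") != "critical"
```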


CLI Approval Channel

Approve agent actions from the terminal

View on GitHub

  • GitHub Stars: 181
  • Forks: 37
  • Category: Operations
  • Updated: 1h ago
  • Languages: JavaScript
  • Security Score: 100/100 (audited on Mar 28, 2026, no findings)