
<div align="center">
     ▄▄▄       ██▓  ▄████  ██▀███   ▄████▄
    ▒████▄    ▓██▒ ██▒ ▀█▒▓██ ▒ ██▒▒██▀ ▀█
    ▒██  ▀█▄  ▒██▒▒██░▄▄▄░▓██ ░▄█ ▒▒▓█    ▄
    ░██▄▄▄▄██ ░██░░▓█  ██▓▒██▀▀█▄  ▒▓▓▄ ▄██▒
     ▓█   ▓██▒░██░░▒▓███▀▒░██▓ ▒██▒▒ ▓███▀ ░
     ▒▒   ▓▒█░░▓   ░▒   ▒ ░ ▒▓ ░▒▓░░ ░▒ ▒  ░

Governance is a property, not a checkpoint.

The open specification and developer toolkit for AI governance engineering.


Website · Field Guide · Specification · Quick Start · Manifesto

</div>

## The Problem

Most AI governance today is documentation theater.

Organizations build AI solutions and agents fast, write compliance documents later, and scramble at audit time. The evidence of what an AI system actually did is never collected at the moment of lowest cost — which is creation. We call this the Truth Tax: the compounding cost of retroactively verifying AI system behavior.

Three things are true about AI governance that most governance tools ignore:

  1. An agent without a business sponsor is a liability without an owner. The question isn't whether your agent works. It's whether anyone in your organization authorized it to exist and is accountable if something goes wrong.

  2. Static analysis fails for systems that reason. You cannot govern an AI agent the way you govern a database query. Agents make decisions. Enforcement needs to happen at runtime, not at code review.

  3. The people who build agents are now responsible for their behavior. Governance tools built for separate compliance teams are the wrong tools for this world. Governance needs to live where the work happens.

AIGRC is the open specification and toolkit that makes governance a property of the agent — embedded at creation, enforced at runtime, traceable to its authorization.


## What This Repo Contains

| Section | What It Is | Who It's For |
|---------|------------|--------------|
| 📚 Field Guide | Educational content on AI governance engineering | Everyone |
| 📐 Specification | The AIGRC governance specification | Architects, Standards Bodies |
| 🛠️ Developer Toolkit | Working CLI, VS Code extension, GitHub Action | Engineers, Developers |
| 🗺️ Roadmaps | Role-based learning paths | Career planners |
| 📖 Resources | Curated papers, regulations, tools | Researchers, Compliance |


## 🛠️ Quick Start

### Install

```bash
npm install -g @aigrc/cli
```

### Scan Your Codebase

```text
$ aigrc scan

  ╭──────────────────────────────────────────────────╮
  │  AIGRC Scan Results                              │
  │──────────────────────────────────────────────────│
  │                                                  │
  │  Frameworks detected:  3                         │
  │    • openai (Python)     → API Client            │
  │    • langchain (Python)  → Orchestration         │
  │    • anthropic (JS)      → API Client            │
  │                                                  │
  │  Risk Classification:  ⚠️  HIGH                  │
  │    Factors: customer-facing, tool-execution,     │
  │    autonomous-decisions                          │
  │                                                  │
  │  Asset card generated:                           │
  │    .aigrc/cards/my-agent.yaml                    │
  │                                                  │
  ╰──────────────────────────────────────────────────╯
```
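The generated asset card might look something like the following. This is an illustrative sketch only — the field names and layout here are assumptions, and the authoritative schema is defined by the Asset Cards specification:

```yaml
# .aigrc/cards/my-agent.yaml — hypothetical example, not the normative schema
apiVersion: aigrc/v1
kind: AssetCard
metadata:
  name: my-agent
spec:
  frameworks:
    - name: openai
      language: python
      role: api-client
  risk:
    level: high
    factors:
      - customer-facing
      - tool-execution
      - autonomous-decisions
  # An empty owner is the "orphan agent" problem: no business sponsor yet
  owner: null
```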

### Initialize Governance

```bash
aigrc init        # Create governance configuration
aigrc classify    # Classify risk level (EU AI Act aligned)
aigrc compliance  # Check compliance status
aigrc push        # Push governance artifacts to AIGOS
```

## Developer Toolkit

| Tool | Purpose | Status |
|------|---------|--------|
| @aigrc/cli | Command-line governance interface | ✅ Shipped |
| aigrc-vscode | VS Code extension — govern in your IDE | ✅ Shipped |
| @aigrc/github-action | CI/CD governance gates | ✅ Shipped |
| @aigrc/core | Core detection + classification library | ✅ Shipped |
| @aigrc/mcp | Model Context Protocol server | ✅ Shipped |
| @aigrc/i2e-bridge | Intent-to-Enforcement compiler | 🔨 Alpha |
| @aigrc/i2e-firewall | Runtime policy enforcement | 🔨 Alpha |
| @aigrc/sdk | Language SDKs (Python, Go) | 📋 Planned |

## Supported Frameworks

<details> <summary><strong>30+ AI/ML frameworks detected automatically</strong></summary>

Python: OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, AutoGen, PyTorch, TensorFlow, Keras, Transformers, scikit-learn, spaCy

JavaScript/TypeScript: OpenAI SDK, Anthropic SDK, Vercel AI SDK, LangChain.js, TensorFlow.js, Brain.js, Hugging Face

Model Files: .pt, .pth, .safetensors, .onnx, .h5, .keras, .pb, .gguf, .ggml, .bin, .mlmodel

</details>
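At a high level, this kind of detection can work by matching a project's declared dependencies against a registry of known AI frameworks. The sketch below is illustrative only — `KNOWN_FRAMEWORKS` and `detectFrameworks` are hypothetical names, not the actual `@aigrc/core` API:

```typescript
// Hypothetical sketch of dependency-based framework detection.
type Framework = {
  name: string;
  language: "Python" | "JavaScript";
  role: "API Client" | "Orchestration";
};

// A small registry keyed by package name, as it would appear in
// requirements.txt or package.json. The real toolkit covers 30+ frameworks.
const KNOWN_FRAMEWORKS: Record<string, Framework> = {
  "openai": { name: "openai", language: "Python", role: "API Client" },
  "langchain": { name: "langchain", language: "Python", role: "Orchestration" },
  "@anthropic-ai/sdk": { name: "anthropic", language: "JavaScript", role: "API Client" },
};

// Given a flat list of declared dependencies, return the AI frameworks found.
function detectFrameworks(dependencies: string[]): Framework[] {
  return dependencies
    .filter((dep) => dep in KNOWN_FRAMEWORKS)
    .map((dep) => KNOWN_FRAMEWORKS[dep]);
}
```

Model-file detection works the same way in spirit: match file extensions such as `.safetensors` or `.onnx` against a known list instead of package names.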

## 📚 Field Guide

The AI Governance Field Guide teaches the principles and practice of governing AI systems — not as a compliance exercise, but as an engineering discipline.

| Chapter | Topic | Key Concept |
|---------|-------|-------------|
| 01 | Why Governance Is Broken | Documentation theater vs. structural accountability |
| 02 | Governance as a Property | The difference between a checkpoint and a property |
| 03 | The Golden Thread | Cryptographic link between agents and business authorization |
| 04 | Intent to Enforcement | Bridging human-language policy and machine-executable constraint |
| 05 | The Orphan Agent Problem | When no one owns the liability |
| 06 | The Truth Tax | The economics of retroactive verification |
| 07 | EU AI Act Practitioner's Guide | What the regulation actually requires |
| 08 | Risk Classification in Practice | Beyond checkboxes — how risk tiers work |


## 📐 Specification

The AIGRC specification defines the data structures, protocols, and interfaces for AI governance. It is an open specification under development — early adopters shape the standard.

| Specification | Purpose | Status |
|---------------|---------|--------|
| Asset Cards | Structured metadata for AI assets | 📗 Stable |
| Model Cards | Model documentation standard | 📗 Stable |
| Data Cards | Dataset governance documentation | 📗 Stable |
| Policy Bindings | Policy-to-asset attachment protocol | 📙 Draft |
| Golden Thread | Business intent traceability protocol | 📙 Draft |
| Governance Token | Runtime governance token protocol | 📙 Draft |
| Incident Reports | Governance incident documentation | 📙 Draft |
| Review Records | Audit review record schema | 📙 Draft |
| Test Reports | Governance test evidence format | 📙 Draft |
| OTel Conventions | OpenTelemetry semantic conventions | 📙 Draft |
| Kill Switch | Emergency agent termination protocol | 📙 Draft |

We're developing an open governance specification, and we're inviting the institutions that implement it first to help shape it. Learn how to contribute →


## Risk Classification

AIGRC classifies AI assets into four risk levels aligned with the EU AI Act:

```text
  ┌─────────────────────────────────────────────────────────┐
  │                                                         │
  │   🔴  UNACCEPTABLE    Prohibited uses                   │
  │       Social scoring, subliminal manipulation           │
  │                                                         │
  │   🟠  HIGH            Significant oversight required    │
  │       Credit scoring, hiring, law enforcement           │
  │                                                         │
  │   🟡  LIMITED         Transparency obligations          │
  │       Chatbots, content generation, recommendations     │
  │                                                         │
  │   🟢  MINIMAL         Low impact, internal use          │
  │       Analytics, internal tools, research               │
  │                                                         │
  └─────────────────────────────────────────────────────────┘
```

Risk is determined by analyzing: autonomous decision-making, customer-facing usage, tool/function execution, external data access, PII processing, and high-stakes decision authority.
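Conceptually, classification is a function from observed risk factors to a tier. The sketch below is a hypothetical illustration of factor-based tiering under assumed rules — the actual classifier's factors, thresholds, and tier logic are defined by the AIGRC specification, not by this example:

```typescript
// Hypothetical factor-to-tier mapping; names and thresholds are illustrative.
type RiskFactor =
  | "autonomous-decisions"
  | "customer-facing"
  | "tool-execution"
  | "external-data-access"
  | "pii-processing"
  | "high-stakes-authority";

type RiskLevel = "HIGH" | "LIMITED" | "MINIMAL";

function classifyRisk(factors: Set<RiskFactor>): RiskLevel {
  // Prohibited (UNACCEPTABLE) uses are assumed to be flagged separately;
  // this sketch only tiers permissible systems.
  if (factors.has("high-stakes-authority")) return "HIGH";
  // Several compounding factors escalate the tier, as in the scan example
  // above (customer-facing + tool-execution + autonomous-decisions → HIGH).
  if (factors.size >= 3) return "HIGH";
  if (factors.has("customer-facing")) return "LIMITED";
  return "MINIMAL";
}
```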


## CI/CD Integration

Add governance gates to your pipeline:

```yaml
# .github/workflows/governance.yml
name: AI Governance

on: [push, pull_request]

jobs:
  governance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aigrc/aigrc@v1
        with:
          fail-on-high-risk: "true"
          create-pr-comment: "true"
```

## 🗺️ Roadmaps
