<div align="center">

# AKIOS

**Secure runtime for multi-agent AI.** Kernel sandboxing (seccomp-bpf), real-time PII redaction, Merkle audit trails.
<a href="https://pypi.org/project/akios/"><img src="https://img.shields.io/pypi/v/akios?color=%2334D058&label=PyPI" alt="PyPI"></a> <a href="https://pypi.org/project/akios/"><img src="https://img.shields.io/pypi/pyversions/akios?color=%2334D058" alt="Python"></a> <a href="https://github.com/akios-ai/akios/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-GPL--3.0--only-blue" alt="License"></a> <a href="https://github.com/akios-ai/akios"><img src="https://img.shields.io/badge/platform-Linux%20%7C%20macOS%20%7C%20Windows-lightgrey" alt="Platform"></a> <a href="https://github.com/akios-ai/akios/stargazers"><img src="https://img.shields.io/github/stars/akios-ai/akios?style=social" alt="Stars"></a>
</div>

<br>

<div align="center">

AKIOS wraps any AI agent in a hardened security cage — kernel-level process isolation,<br>
real-time PII redaction, cryptographic Merkle audit trails, and automatic cost kill-switches —<br>
so you can deploy AI workflows in regulated environments without building security from scratch.
</div>

<br>

<div align="center">

Quick Start · Architecture · Features · Documentation · Contributing
</div>

<br>

## 🏗️ Architecture
Every workflow step passes through five security layers before anything touches the outside world.
```
              ┌────────────────────────────────────┐
              │        Untrusted AI Agents         │
              │        LLMs, Code, Plugins         │
              └──────────────────┬─────────────────┘
                                 │
                                 ▼
╔════════════════════════════════════════════════════════════════╗
║                     AKIOS SECURITY RUNTIME                     ║
║                                                                ║
║  ┌──────────────────────────────────────────────────────────┐  ║
║  │  1. Policy Engine    allowlist verification              │  ║
║  │  2. Kernel Sandbox   seccomp-bpf + cgroups v2            │  ║
║  │  3. PII Redaction    44 patterns, 6 categories           │  ║
║  │  4. Budget Control   cost kill-switches, token limits    │  ║
║  │  5. Audit Ledger     Merkle tree, SHA-256, JSONL         │  ║
║  └──────────────────────────────────────────────────────────┘  ║
║                                                                ║
╚════════════════════════════════╤═══════════════════════════════╝
                                 │
                                 ▼
              ┌────────────────────────────────────┐
              │      Protected Infrastructure      │
              │       APIs, Databases, Cloud       │
              └────────────────────────────────────┘
```
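The layered flow above can be sketched as an ordered chain of callables, where any layer may veto a request before it reaches the next. This is an illustrative sketch only, not the AKIOS API; `Request`, `policy_engine`, and `run_through_cage` are names invented for the example, and only two of the five layers are shown.

```python
# Illustrative sketch of a layered security pipeline (NOT the AKIOS API).
from dataclasses import dataclass, field

@dataclass
class Request:
    action: str
    payload: str
    log: list = field(default_factory=list)

def policy_engine(req):
    # Hypothetical allowlist: only these actions may proceed.
    if req.action not in {"read", "complete", "post"}:
        raise PermissionError(f"action {req.action!r} not allowlisted")
    req.log.append("policy: ok")
    return req

def pii_redaction(req):
    # Stand-in for a real pattern engine: scrub one known SSN.
    req.payload = req.payload.replace("123-45-6789", "[SSN_REDACTED]")
    req.log.append("pii: scanned")
    return req

# A real runtime would also chain sandbox, budget, and audit layers here.
LAYERS = [policy_engine, pii_redaction]

def run_through_cage(req):
    for layer in LAYERS:
        req = layer(req)   # any layer can raise and abort the request
    return req

req = run_through_cage(Request("read", "SSN is 123-45-6789"))
print(req.payload)   # SSN is [SSN_REDACTED]
```

The key property is ordering: a request that fails an early layer never reaches a later one, so a disallowed action is rejected before any redaction or I/O happens.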
## 🚀 Quick Start
```bash
pip install akios
akios init my-project && cd my-project
akios setup                              # Configure LLM provider (interactive)
akios run templates/hello-workflow.yml   # Run inside the security cage
```
<details>
<summary><b>📦 Docker (all platforms — macOS, Linux, Windows)</b></summary>

```bash
curl -O https://raw.githubusercontent.com/akios-ai/akios/main/src/akios/cli/data/wrapper.sh
mv wrapper.sh akios && chmod +x akios
./akios init my-project && cd my-project
./akios run templates/hello-workflow.yml
```

</details>
### What happens when you run a workflow
```
$ akios run workflow.yml

╔══════════════════════════════════════════════════════════╗
║                   AKIOS Security Cage                    ║
╠══════════════════════════════════════════════════════════╣
║  🔒 Sandbox:  ACTIVE (seccomp-bpf + cgroups v2)          ║
║  🚫 PII Scan: 44 patterns loaded                         ║
║  💰 Budget:   $1.00 limit ($0.00 used)                   ║
║  📋 Audit:    Merkle chain initialized                   ║
╚══════════════════════════════════════════════════════════╝

▶ Step 1/3: read-document ─────────────────────────────
  Agent: filesystem │ Action: read
  ✓ PII redacted: 3 patterns found (SSN, email, phone)
  ✓ Audit event #1 logged

▶ Step 2/3: analyze-with-ai ───────────────────────────
  Agent: llm │ Model: gpt-4o │ Tokens: 847
  ✓ Prompt scrubbed before API call
  ✓ Cost: $0.003 of $1.00 budget
  ✓ Audit event #2 logged

▶ Step 3/3: save-results ──────────────────────────────
  Agent: filesystem │ Action: write
  ✓ Output saved to data/output/run_20250211_143052/
  ✓ Audit event #3 logged

══════════════════════════════════════════════════════════
✅ Workflow complete │ 3 steps │ $0.003 │ 0 PII leaked
══════════════════════════════════════════════════════════
```
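The budget line in the output above reflects a simple invariant: every step reports its cost, and the run is terminated the moment cumulative spend would cross the limit. A minimal sketch of such a guard (the `BudgetGuard` class is hypothetical, not AKIOS's implementation):

```python
# Illustrative cost kill-switch: refuse any charge that would exceed the budget.
class BudgetExceeded(RuntimeError):
    pass

class BudgetGuard:
    def __init__(self, limit_usd):
        self.limit = limit_usd
        self.spent = 0.0

    def charge(self, cost_usd):
        # Check BEFORE spending, so the limit is never crossed.
        if self.spent + cost_usd > self.limit:
            raise BudgetExceeded(
                f"${self.spent + cost_usd:.3f} would exceed ${self.limit:.2f} limit"
            )
        self.spent += cost_usd

guard = BudgetGuard(limit_usd=1.00)
guard.charge(0.003)   # the $0.003 LLM call from step 2 above
print(f"${guard.spent:.3f} of ${guard.limit:.2f} used")   # $0.003 of $1.00 used
```

Checking before the call is made, rather than after, is what turns a cost report into a kill-switch.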
## 🎯 Why AKIOS?
AI agents can leak PII to LLM providers, run up massive bills, execute dangerous code, and leave no audit trail. Every team building with LLMs faces this security engineering burden.
AKIOS provides compliance-by-construction — security guarantees that are architectural, not bolted on:
| | Without AKIOS | With AKIOS |
|:---:|:---|:---|
| 🚫 | PII leaks to LLM providers | Automatic redaction before any API call |
| 💸 | Runaway API costs | Hard budget limits with kill-switches |
| 📋 | No audit trail for compliance | Cryptographic Merkle-chained logs |
| 🔓 | Manual security reviews | Kernel-enforced process isolation |
| 🤞 | Hope-based security | Proof-based security |
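The Merkle-chained logs in the table above rely on a standard construction: each audit entry stores the hash of its predecessor, so editing any past event invalidates every hash after it. A minimal SHA-256 hash-chain sketch (illustrative only; the real ledger format is defined by AKIOS, not by this snippet):

```python
# Tamper-evident audit log: each entry commits to the previous entry's hash.
import hashlib
import json

GENESIS = "0" * 64

def append_event(chain, event):
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)          # canonical serialization
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev, "hash": digest})
    return chain

def verify(chain):
    prev = GENESIS
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False                              # chain broken here
        prev = entry["hash"]
    return True

chain = []
append_event(chain, {"step": "read-document", "pii_found": 3})
append_event(chain, {"step": "analyze-with-ai", "cost": 0.003})
print(verify(chain))                  # True
chain[0]["event"]["pii_found"] = 0    # tamper with history...
print(verify(chain))                  # False: every later hash is now invalid
```

Because each hash covers the previous one, an auditor only needs the final hash to detect any retroactive edit anywhere in the log.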
## 🛡️ Key Features
<table>
<tr>
<td width="50%">

**🔒 Kernel-Hard Sandbox**

seccomp-bpf syscall filtering + cgroups v2 resource isolation on native Linux. Policy-based isolation on Docker (all platforms).

**🚫 PII Redaction Engine**

44 detection patterns across 6 categories: personal, financial, health, digital, communication, location. Covers SSN, credit cards, emails, phones, addresses, API keys, and more. Redaction happens before data reaches any LLM.

**📋 Merkle Audit Trail**

Every action is cryptographically chained. Tamper-evident JSONL logs with SHA-256 proofs. Export to JSON for compliance reporting.
</td>
<td width="50%">

**💰 Cost Kill-Switches**

Hard budget limits ($1 default) with automatic workflow termination. Token tracking across all providers. Real-time `akios status --budget` dashboard.

**🤖 Multi-Provider LLM Support**

OpenAI, Anthropic, Grok (xAI), Mistral, Gemini, AWS Bedrock, Ollama — swap providers in one line of config. All calls are sandboxed, audited, and budget-tracked.
</td>
</tr>
</table>

## 📝 Workflow Schema
AKIOS orchestrates YAML-defined workflows through 6 secure agents — each running inside the security cage:
```yaml
# workflow.yml — every step runs inside the cage
name: "document-analysis"
steps:
  - name: "read-document"
    agent: filesystem        # 📁 Path-whitelisted file access
    action: read
    parameters:
      path: "data/input/report.pdf"

  - name: "analyze-with-ai"
    agent: llm               # 🤖 Token-tracked, PII-scrubbed
    action: complete
    parameters:
      prompt: "Summarize this document: {previous_output}"
      model: "gpt-4o"
      max_tokens: 500

  - name: "notify-team"
    agent: http              # 🌐 Domain-whitelisted, rate-limited
    action: post
    parameters:
      url: "https://api.example.com/webhook"
      json:
        summary: "{previous_output}"
```
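The `{previous_output}` placeholder above is plain string interpolation between steps. A hypothetical sketch of how a runner might thread one step's output into the next step's parameters (illustrative only, not AKIOS's actual interpolation code):

```python
# Substitute the previous step's output into any string-valued parameter.
def interpolate(params, previous_output):
    return {
        key: value.replace("{previous_output}", previous_output)
        if isinstance(value, str) else value   # leave ints, dicts, etc. alone
        for key, value in params.items()
    }

params = {
    "prompt": "Summarize this document: {previous_output}",
    "max_tokens": 500,
}
step_input = interpolate(params, "Q3 revenue grew 12%.")
print(step_input["prompt"])   # Summarize this document: Q3 revenue grew 12%.
```

In a cage like the one described above, this substitution would run after redaction, so only scrubbed output ever flows into the next step's prompt.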
<details>
<summary><b>🔍 Preview what the LLM actually sees (after PII redaction)</b></summary>

```
$ akios protect show-prompt workflow.yml

Interpolated prompt (redacted):
"Summarize this document: The patient [NAME_REDACTED] with
 SSN [SSN_REDACTED] was seen at [ADDRESS_REDACTED]..."

# 3 PII patterns redacted before reaching OpenAI
```

</details>
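Redaction of the kind previewed above comes down to pattern substitution before the prompt leaves the process. A minimal sketch with two illustrative patterns (the real engine ships 44 patterns across 6 categories; these regexes are examples, not AKIOS's):

```python
# Replace PII matches with labeled placeholders and count what was found.
import re

PATTERNS = {
    "SSN_REDACTED":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL_REDACTED": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    found = 0
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[{label}]", text)   # substitute and count
        found += n
    return text, found

clean, n = redact("Contact jane@example.com, SSN 123-45-6789.")
print(clean)   # Contact [EMAIL_REDACTED], SSN [SSN_REDACTED].
print(n)       # 2
```

Running this before every outbound API call, rather than on logs afterwards, is what makes the "0 PII leaked" guarantee architectural rather than forensic.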
## 🔐 Security Levels
| Environment | Isolation | PII | Audit | Budget | Best For |
|:---|:---|:---:|:---:|:---:|:---|
| Native Linux | seccomp-bpf + cgroups v2 | ✅ | ✅ | ✅ | Production, maximum guarantees |
| Docker (all platforms) | Container + policy-based | ✅ | ✅ | ✅ | Development, cross-platform |
Native Linux provides kernel-level guarantees: dangerous syscalls are blocked by the kernel itself, not by application code. Docker provides strong, reliable security across macOS, Linux, and Windows.
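As a taste of what "kernel-enforced" means here: unprivileged seccomp-bpf filtering requires a process to first set the `no_new_privs` flag, after which neither it nor its children can ever gain privileges (e.g. via setuid binaries). A Linux-only sketch of that first step, using `prctl(2)` directly (this is generic kernel plumbing, not AKIOS's sandbox code):

```python
# Set PR_SET_NO_NEW_PRIVS on the current process (Linux only).
# This is irreversible for the process and a prerequisite for
# installing unprivileged seccomp-bpf filters.
import ctypes
import sys

PR_SET_NO_NEW_PRIVS = 38  # constant from <linux/prctl.h>

def harden_current_process():
    if not sys.platform.startswith("linux"):
        return False   # kernel hardening is Linux-only; use the Docker path elsewhere
    libc = ctypes.CDLL(None, use_errno=True)
    return libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == 0

print(harden_current_process())
```

A real sandbox would follow this with a seccomp-bpf filter program that whitelists the syscalls a workflow step is allowed to make; everything else is refused by the kernel.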
