Zora

Zora — a long‑running local AI agent with provider registry and secure tool access.

Install / Use

/learn @ryaker/Zora
Zora

Your personal AI agent. Local, secure, and memory-first.

Zora runs on your computer, takes real actions (reads files, runs commands, automates tasks), and actually remembers what it's doing between sessions — without giving up control of your system.

Text it from Signal. Approve risky actions from your phone. Sleep knowing it can't go rogue.

| | Zora | OpenClaw |
|---|---|---|
| Default posture | Locked — zero access until you grant it | Open — everything permitted unless restricted |
| Safety rules location | policy.toml file, loaded before every action | In the conversation — erased by context compaction |
| Skill marketplace | None — you install local files | ClawHub (800+ malicious skills found, ~20% of registry) |
| E2E encrypted channel | Signal + Telegram | Not built-in |
| Prompt injection defense | Dual-LLM quarantine (CaMeL architecture) | None |
| Runaway loop prevention | Action budget + irreversibility scoring | None |
| Misconfigured behavior | Does nothing | Full system access |


---

Why This Matters Right Now

In early 2026, OpenClaw went viral — 180,000 GitHub stars in weeks. Security teams immediately found the problems: 30,000+ instances exposed to the internet without authentication, 800+ malicious skills in its registry (~20% of all skills), and a CVSS 8.8 RCE vulnerability exploitable even against localhost.

Around the same time, Summer Yue — Meta's director of AI alignment — posted about her OpenClaw agent deleting 200+ emails after she'd told it to wait for approval before doing anything. She screamed "STOP OPENCLAW" at it. It kept going. The root cause: context compaction. As her inbox grew, the AI's working memory filled up and started summarizing — including compressing her original "wait for approval" instruction into nothing.

These aren't edge cases. They're architectural problems.

Zora was built to not have them.


---

The Security Architecture (Plain English)

1. Locked by Default

When you first install Zora, it can do nothing. Zero filesystem access, no shell commands, no network calls. You explicitly unlock capabilities during setup by choosing a trust level. OpenClaw's model is the opposite — everything is permitted unless you find and configure the restriction.

What this means: A misconfigured Zora does nothing. A misconfigured OpenClaw has full system access.

# ~/.zora/policy.toml — your rules, loaded before every action
[filesystem]
allow = ["~/Projects", "~/.zora/workspace"]
deny  = ["~/.ssh", "~/.gnupg", "~/Library", "/"]

[shell]
allow = ["git", "ls", "rg", "node", "npm"]
deny  = ["sudo", "rm", "curl", "chmod"]

[budget]
max_actions_per_session = 100   # runaway loop prevention

2. Policies Live in Config Files, Not the Conversation

This is the Summer Yue fix.

Her "wait for approval" instruction was text in the AI's context window — the running conversation. When the context got too long, the agent summarized it, and the instruction got compressed away. The AI wasn't defying her. It had genuinely forgotten.

Zora's safety rules live in ~/.zora/policy.toml — a config file loaded by the PolicyEngine before every single action. Not once at the start of a conversation. Before every action. Context can compact all it wants; the policy file doesn't change.

User says something → LLM decides what to do → PolicyEngine checks policy.toml → Allowed? Execute. Blocked? Refuse.

The LLM cannot talk the PolicyEngine into ignoring policy.toml. They don't share a channel.

3. No Centralized Skill Marketplace

OpenClaw has ClawHub — a centralized registry where third-party skills are auto-discovered and installed. Security researchers found 800+ malicious skills (~20% of the registry) delivering malware. The centralized model means one poisoned registry affects every user.

Zora supports skills, but there is no ClawHub equivalent. Skills are local files you install yourself — you control what you add and when. There's no background auto-update pulling code from a shared registry.

What this means: You can't poison a registry that doesn't exist. The supply chain attack surface scales with your own choices, not with a marketplace serving 180,000 users.

Zora scans every skill before it installs — and audits already-installed skills to catch anything dropped in manually:

# Install a .skill package — scanned before anything executes
zora-agent skill install my-skill.skill

# Audit all installed skills (catches git clone, copy-paste installs)
zora-agent skill audit

# Scan only, don't install
zora-agent skill install my-skill.skill --dry-run

# Raise threshold to catch medium-severity findings too
zora-agent skill install my-skill.skill --threshold medium

# Install anyway despite warnings (use with caution)
zora-agent skill install my-skill.skill --force

The scanner uses AST analysis (js-x-ray) to detect obfuscation, eval, data exfiltration, environment variable theft, curl | bash patterns, hardcoded secrets, and overly-permissive allowed-tools declarations — the exact patterns found in malicious ClawHub skills.

4. Action Budget

Every session has a maximum number of actions (default: 100). If an agent enters a loop, it hits the budget and stops — it doesn't run until something externally kills it. Budget is configurable per task type.

5. Full Audit Log

Every action Zora takes — every file read, every command run, every tool call — is written to a tamper-proof log. Not just "task completed" but the specific action, the path, the command, the timestamp, and the outcome.

zora-agent audit              # browse your log
zora-agent audit --last 50    # last 50 actions

OWASP coverage: Zora is hardened against the OWASP LLM Top 10 and OWASP Agentic Top 10 — prompt injection, tool-output injection, intent verification, action budgets, dry-run preview mode. See SECURITY.md for the technical breakdown.

6. Runtime Safety Layer

While policies define what Zora is allowed to do, the runtime safety layer adds a second tier that answers a different question: how risky is this specific action, right now? When the answer is "too risky," Zora stops and asks.

Irreversibility Scoring. Every tool call is scored 0–100 before it executes. Writing a file: 20. A git push to origin: 70. Sending a Signal message: 80. Deleting a file: 95. Scores are configurable in policy.toml:

[actions.thresholds]
warn      = 40   # log warning, allow
flag      = 65   # pause and ask for approval
auto_deny = 95   # block without asking

Human-in-the-loop Approval. When an action scores above the flag threshold, Zora pauses and routes to an approval queue. Enable in config.toml:

[approval]
enabled = true
channel = "telegram"    # or "signal"
timeout_s = 300         # auto-deny after 5 minutes

When triggered, you receive:

⚠️ Zora Action Approval Required
Action: git_push (origin main)
Risk: 70/100 (high)
Token: ZORA-A8F2

Reply: allow | deny | allow-30m | allow-session

You can approve once, approve for 30 minutes, approve for the session, or deny. Note: Channel delivery (Telegram/Signal) requires a configured messaging adapter. See Multi-Channel Messaging.

Session Risk Forecasting. Zora tracks three risk signals across a session — drift (has the agent veered from its original task?), salami (is it building toward something harmful in small steps?), and commitment creep (are actions getting progressively more irreversible?). When the composite score passes a threshold, the next action routes to the approval queue regardless of its individual score.

Agent Reputation. When a spawned subagent repeatedly gets its actions blocked, it enters cooldown: throttled (2s delay), then restricted (all actions need explicit approval), then shut down. Resets after 24 hours of clean behavior.

Per-Project Security Scope. You can give each subagent a tighter policy than the global one. A PM agent doesn't need shell access. A code-review agent doesn't need to send messages. Drop a .zora/security-policy.toml in your project and it inherits the global policy then applies additional restrictions — it can't loosen the global ceiling.

# .zora/security-policy.toml
[policy.tools]
denied = ["bash", "spawn_zora_agent"]

[policy.actions]
max_irreversibility_score = 60  # nothing above a git commit

Startup Security Audit. Every time the daemon starts, Zora scans its own configuration:

$ zora security
✓ PASS  ~/.zora/ permissions (700)
✓ PASS  config.toml permissions (600)
✗ FAIL  Bot token found in plaintext in config.toml:44
⚠ WARN  Node.js 18.x — upgrade to 20 LTS

zora security --fix   # auto-fixes WARN issues

FAILs block daemon startup. WARNs log and continue. All opt-in via config — enable only what you need.

For full configuration reference, see Runtime Safety Layer.


---

Memory That Survives

AI agents have two memory problems: they forget between sessions, and they forget within sessions when the context window fills up.

Between-session memory

Zora writes to `~/.zora/memory/
