
Zapcode

TypeScript interpreter for AI agents. Written in Rust. 2µs cold start. Sandboxed. Alternative to MCP tool calling.

Install / Use

/learn @TheUncharted/Zapcode

README

<p align="center"> <img src="https://raw.githubusercontent.com/TheUncharted/zapcode/master/assets/logo.png" alt="Zapcode" width="160" /> </p> <h1 align="center">Zapcode</h1> <p align="center"><strong>Run AI-generated code. Safely. Instantly.</strong></p> <p align="center">A minimal, secure TypeScript interpreter written in Rust for use by AI agents</p> <p align="center"> <a href="https://github.com/TheUncharted/zapcode/actions"><img src="https://img.shields.io/github/actions/workflow/status/TheUncharted/zapcode/ci.yml?branch=master&label=CI" alt="CI"></a> <a href="https://crates.io/crates/zapcode-core"><img src="https://img.shields.io/crates/v/zapcode-core" alt="crates.io"></a> <a href="https://www.npmjs.com/package/@unchartedfr/zapcode"><img src="https://img.shields.io/npm/v/@unchartedfr/zapcode" alt="npm"></a> <a href="https://pypi.org/project/zapcode/"><img src="https://img.shields.io/pypi/v/zapcode" alt="PyPI"></a> <a href="https://github.com/TheUncharted/zapcode/blob/master/LICENSE"><img src="https://img.shields.io/github/license/TheUncharted/zapcode" alt="License"></a> </p>

Experimental — Zapcode is under active development. APIs may change.

Why agents should write code

AI agents are more capable when they write code instead of chaining tool calls. Code gives agents loops, conditionals, variables, and composition — things that tool chains simulate poorly.
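The difference shows up even in a small multi-step task. As a sketch in plain TypeScript (`getPrice` is a hypothetical tool, not part of Zapcode), one code snippet expresses a fetch, filter, and aggregate pipeline that a tool chain would need a separate model turn per step to simulate:

```typescript
// Hypothetical tool: in a tool-chain agent, each call would be one round trip.
async function getPrice(item: string): Promise<number> {
  const prices: Record<string, number> = { widget: 25.99, gadget: 49.99, gizmo: 9.5 };
  return prices[item] ?? 0;
}

// As code, the agent composes the whole computation in one shot:
async function main() {
  const items = ["widget", "gadget", "gizmo"];
  const prices = await Promise.all(items.map(getPrice));
  const total = prices.filter(p => p > 10).reduce((a, b) => a + b, 0);
  console.log(total); // 75.98
}
main();
```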

But running AI-generated code is dangerous and slow.

Docker adds 200-500 ms of cold-start latency and requires a container runtime. V8 isolates bring ~20 MB of binary size and millisecond startup times. Neither supports snapshotting execution mid-function.

Zapcode takes a different approach: a purpose-built TypeScript interpreter that starts in 2 microseconds, enforces a security sandbox at the language level, and can snapshot execution state to bytes for later resumption — all in a single, embeddable library with zero dependencies on Node.js or V8.

Inspired by Monty, Pydantic's Python subset interpreter that takes the same approach for Python.

Alternatives

| | Language completeness | Security | Startup | Snapshots | Setup |
|---|---|---|---|---|---|
| Zapcode | TypeScript subset | Language-level sandbox | ~2 µs | Built-in, < 2 KB | npm install / pip install |
| Docker + Node.js | Full Node.js | Container isolation | ~200-500 ms | No | Container runtime |
| V8 Isolates | Full JS/TS | Isolate boundary | ~5-50 ms | No | V8 (~20 MB) |
| Deno Deploy | Full TS | Isolate + permissions | ~10-50 ms | No | Cloud service |
| QuickJS | Full ES2023 | Process isolation | ~1-5 ms | No | C library |
| WASI/Wasmer | Depends on guest | Wasm sandbox | ~1-10 ms | Possible | Wasm runtime |

Why not Docker?

Docker provides strong isolation but adds hundreds of milliseconds of cold-start latency, requires a container runtime, and doesn't support snapshotting execution state mid-function. For AI agent loops that execute thousands of small code snippets, the overhead dominates.
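A back-of-envelope calculation with the figures quoted above (illustrative only, using the mid-range of the Docker cold-start window) shows how the overhead compounds:

```typescript
// 1,000 small snippets, which an agent loop can easily generate in one session.
const snippets = 1000;
const dockerColdStartMs = 300;    // mid-range of the ~200-500 ms figure
const zapcodeColdStartMs = 0.002; // ~2 µs

console.log(snippets * dockerColdStartMs / 1000); // 300 seconds lost to startup alone
console.log(snippets * zapcodeColdStartMs);       // about 2 milliseconds total
```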

Why not V8?

V8 is the gold standard for JavaScript execution. But it brings ~20 MB of binary size, millisecond startup times, and a vast API surface that must be carefully restricted for sandboxing. If you need full ECMAScript compliance, use V8. If you need microsecond startup, byte-sized snapshots, and a security model where "blocked by default" is the foundation rather than an afterthought, use Zapcode.

Benchmarks

All benchmarks run the full pipeline: parse → compile → execute. No caching, no warm-up.

| Benchmark | Zapcode | Docker + Node.js | V8 Isolate |
|---|---|---|---|
| Simple expression (1 + 2 * 3) | 2.1 µs | ~200-500 ms | ~5-50 ms |
| Variable arithmetic | 2.8 µs | — | — |
| String concatenation | 2.6 µs | — | — |
| Template literal | 2.9 µs | — | — |
| Array creation | 2.4 µs | — | — |
| Object creation | 5.2 µs | — | — |
| Function call | 4.6 µs | — | — |
| Promise.resolve + await | 3.1 µs | — | — |
| Promise.then (single) | 5.6 µs | — | — |
| Promise.then chain (×3) | 9.9 µs | — | — |
| Promise.all (3 promises) | 7.4 µs | — | — |
| Async .map() (3 elements) | 11.6 µs | — | — |
| Loop (100 iterations) | 77.8 µs | — | — |
| Fibonacci (n=10, 177 calls) | 138.4 µs | — | — |
| Snapshot size (typical agent) | < 2 KB | N/A | N/A |
| Memory per execution | ~10 KB | ~50+ MB | ~20+ MB |
| Cold start | ~2 µs | ~200-500 ms | ~5-50 ms |

No background thread, no GC, no runtime — CPU usage is exactly proportional to the instructions executed.

cargo bench   # run benchmarks yourself

Installation

TypeScript / JavaScript

npm install @unchartedfr/zapcode        # npm / yarn / pnpm / bun

Python

pip install zapcode                     # pip / uv

Rust

# Cargo.toml
[dependencies]
zapcode-core = "1.0.0"

WebAssembly

wasm-pack build crates/zapcode-wasm --target web

Basic Usage

TypeScript / JavaScript

import { Zapcode, ZapcodeSnapshotHandle } from '@unchartedfr/zapcode';

// Simple expression
const b = new Zapcode('1 + 2 * 3');
console.log(b.run().output);  // 7

// With inputs
const greeter = new Zapcode(
    '`Hello, ${name}! You are ${age} years old.`',
    { inputs: ['name', 'age'] },
);
console.log(greeter.run({ name: 'Zapcode', age: 30 }).output);

// Data processing
const processor = new Zapcode(`
    const items = [
        { name: "Widget", price: 25.99, qty: 3 },
        { name: "Gadget", price: 49.99, qty: 1 },
    ];
    const total = items.reduce((sum, i) => sum + i.price * i.qty, 0);
    ({ total, names: items.map(i => i.name) })
`);
console.log(processor.run().output);
// { total: 127.96, names: ["Widget", "Gadget"] }

// External function (snapshot/resume)
const app = new Zapcode(`const data = await fetch(url); data`, {
    inputs: ['url'],
    externalFunctions: ['fetch'],
});
const state = app.start({ url: 'https://api.example.com' });
if (!state.completed) {
    console.log(state.functionName);  // "fetch"
    const snapshot = ZapcodeSnapshotHandle.load(state.snapshot);
    const final_ = snapshot.resume({ status: 'ok' });
    console.log(final_.output);  // { status: "ok" }
}

See examples/typescript/basic/main.ts for more.

Python

from zapcode import Zapcode, ZapcodeSnapshot

# Simple expression
b = Zapcode("1 + 2 * 3")
print(b.run()["output"])  # 7

# With inputs
b = Zapcode(
    '`Hello, ${name}!`',
    inputs=["name"],
)
print(b.run({"name": "Zapcode"})["output"])  # "Hello, Zapcode!"

# External function (snapshot/resume)
b = Zapcode(
    "const w = await getWeather(city); `${city}: ${w.temp}°C`",
    inputs=["city"],
    external_functions=["getWeather"],
)
state = b.start({"city": "London"})
if state.get("suspended"):
    result = state["snapshot"].resume({"condition": "Cloudy", "temp": 12})
    print(result["output"])  # "London: 12°C"

# Snapshot persistence
state = b.start({"city": "Tokyo"})
if state.get("suspended"):
    bytes_ = state["snapshot"].dump()          # serialize to bytes
    restored = ZapcodeSnapshot.load(bytes_)    # load from bytes
    result = restored.resume({"condition": "Clear", "temp": 26})

See examples/python/basic/main.py for more.

<details> <summary><strong>Rust</strong></summary>
use zapcode_core::{ZapcodeRun, Value, ResourceLimits, VmState};

// Simple expression
let runner = ZapcodeRun::new(
    "1 + 2 * 3".to_string(), vec![], vec![],
    ResourceLimits::default(),
)?;
assert_eq!(runner.run_simple()?, Value::Int(7));

// With inputs and external functions (snapshot/resume)
let runner = ZapcodeRun::new(
    r#"const weather = await getWeather(city);
       `${city}: ${weather.condition}, ${weather.temp}°C`"#.to_string(),
    vec!["city".to_string()],
    vec!["getWeather".to_string()],
    ResourceLimits::default(),
)?;

let state = runner.start(vec![
    ("city".to_string(), Value::String("London".into())),
])?;

if let VmState::Suspended { snapshot, .. } = state {
    let weather = Value::Object(indexmap::indexmap! {
        "condition".into() => Value::String("Cloudy".into()),
        "temp".into() => Value::Int(12),
    });
    let final_state = snapshot.resume(weather)?;
    // VmState::Complete("London: Cloudy, 12°C")
}

See examples/rust/basic/basic.rs for more.

</details> <details> <summary><strong>WebAssembly (browser)</strong></summary>
<script type="module">
import init, { Zapcode } from './zapcode-wasm/zapcode_wasm.js';

await init();

const b = new Zapcode(`
    const items = [10, 20, 30];
    items.map(x => x * 2).reduce((a, b) => a + b, 0)
`);
const result = b.run();
console.log(result.output);  // 120
</script>

See examples/wasm/basic/index.html for a full playground.

</details>

AI Agent Usage

Vercel AI SDK (@unchartedfr/zapcode-ai)

npm install @unchartedfr/zapcode-ai ai @ai-sdk/anthropic  # or @ai-sdk/amazon-bedrock, @ai-sdk/openai

The recommended way — one call gives you { system, tools } that plug directly into generateText / streamText:

import { zapcode } from "@unchartedfr/zapcode-ai";
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const { system, tools } = zapcode({
  system: "You are a helpful travel assistant.",
  tools: {
    getWeather: {
      description: "Get current weather for a city",
      // (example truncated in the source; a Vercel AI SDK tool definition
      // typically continues with a parameter schema and result handling)
    },
  },
});

View on GitHub

GitHub stars: 65 · Forks: 2
Category: Development
Updated: 5d ago

Languages

Rust

Security Score

100/100

Audited on Apr 1, 2026

No findings