
zexplorer (zxp)

Zig support

zexplorer is a fast, zero-dependency HTML+JS engine. Think ffmpeg for the web. You can use it as a command-line tool, an HTTP dev-server or an MCP server for LLM agents — no browser, no Node.js, no Python, no runtime.

The MCP service gives your LLM agent eyes and persistent local storage, zero infra.

<p align="center"> <img src="https://github.com/ndrean/zexplorer/blob/main/images/zexplorer.png" alt="logo" width="700" height="700" /> </p> <br>

TL;DR:

  • Cold start: ~3ms
  • Memory: ~12MB
  • Zero dependencies. Single statically-compiled binary.
  • Stateless by default, stateful on demand via a zero-config embedded SQLite store for local persistence.
  • Pipelines: Native support for parsing Markdown, CSV, and SVG.
  • Outputs: Return raw data (JSON, strings, binary arrays), Markdown or render layouts (Flexbox) to PNG, JPEG, WEBP, and PDF.
  • MCP service - the token saver - lets the LLM run scripts server-side (scrape, transform, render) and returns only the result.
  • Usage: Composable CLI tool or a high-concurrency HTTP rendering service.

What can it do?

It can:

  • Scrape — fetch a URL, hydrate React, render Vue/Svelte/Lit/SolidJS, WebComponents, extract data. No headless browser.
  • Stream — consume LLM output via SSE (currently local Ollama, extendable to any OpenAI-compatible endpoint); receive HTML chunks and rebuild a live DOM incrementally.
  • Expose — serve as an MCP server so LLM agents (Claude Desktop, Gemini CLI…) can run_script using the custom API, or use the shortcuts render_html, render_markdown, render_url, and receive data or screenshots directly in the conversation.
  • Render — Flexbox layouts only, all static: HTML+JS+SVG such as D3, Chart.js, Leaflet, ECharts. Basic Canvas API support; outputs PNG/JPEG/WEBP/PDF.
  • Generate — design an SVG in Figma, plug in data, batch-produce OG images or PDF reports.
  • Sanitize — DOM+CSS-aware HTML sanitization (stylesheets, inline styles, XSS/mXSS). Built-in.
  • Run JS — execute ES2020 scripts against a real DOM with fetch, timers, workers, and an event loop.
  • Store & Persist — drop text, blobs, and images into the local store, no ceremony.
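As a sketch of what a scrape script can look like: the engine provides `document` and `fetch`; the helper and selector below are hypothetical, not part of the zxp API.

```javascript
// Pure helper: turn anchor-like nodes into structured rows.
// (Hypothetical name; inside zexplorer you would feed it real DOM nodes.)
function extractLinks(anchors) {
  return anchors
    .filter((a) => a.href && a.textContent && a.textContent.trim())
    .map((a) => ({ title: a.textContent.trim(), url: a.href }));
}

// Engine-side usage (document is provided by zexplorer):
// const rows = extractLinks([...document.querySelectorAll("a")]);
// JSON.stringify(rows)
```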

Limitations:

  • No TypeScript support. JSX is supported via tagged templates (using htm).
  • Cannot scrape bot-protected public websites.
  • Cannot paint complex CSS: no 2D grid or position:fixed, no CSS functions or variables, no media queries, and only basic Canvas support.

Security

If you use your own trusted code, you can skip sanitization entirely. For untrusted content:

[!WARNING] All layers are best-effort — see SECURITY.md for full details.

  • Content sanitization — DOM+CSS-aware: stylesheets, inline styles, iframes, SVG/MathML, DOM clobbering, URI schemas, XSS/mXSS. Tested against H5SC, OWASP, PortSwigger, and DOMPurify.
  • Filesystem sandbox — kernel-enforced openat() with symlink blocking, traversal rejection, cross-device check.
  • Network hardening — timeouts, redirect/size limits, SSRF pre-flight filtering, HTTPS-only remote imports.
  • Resource limits — worker fan-out caps, busy-loop interrupts, max stack/GC/memory, wall-clock deadlines.

Examples

| Example | What it shows | Output | CLI | Server |
| ------- | ------------- | ------ | :---: | :---: |
| MCP server | Give Claude Desktop / Gemini visual eyes | PNG | – | ✓ |
| LLM generative UI | Ollama/OpenAI SSE → DOM → image | WEBP | ✓ | ✓ |
| Dynamic HTML card | htm tagged templates → paintDOM | PNG | ✓ | ✓ |
| CSS grid / flexbox layout | grid-1D + flexbox → terminal image | PNG | ✓ | ✓ |
| Scrape Hacker News | fetch → DOM query → structured data | JSON | ✓ | ✓ |
| Vercel SPA scrape | Next.js hydration → waitForSelector | JSON | ✓ | ✓ |
| Vercel site snapshot | SSR page → inlined images → render | WEBP | ✓ | ✓ |
| Echarts | Echarts SVG → rasterize | WEBP | ✓ | ✓ |
| Leaflet map PDF | GeoJSON route → OSM tiles → SVG → PDF | PDF | – | ✓ |


MCP server

Start the server (the . sets the sandbox root for file access and the SQLite store):

./zig-out/bin/zxp serve .

Connect Claude Desktop — add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "zexplorer": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "http://localhost:9984/mcp"]
    }
  }
}

Available tools:

| Tool | What it does |
| ---- | ----------- |
| render_html | Render an HTML string → PNG/WEBP/JPEG (base64 image in MCP response) |
| render_markdown | Render GFM Markdown → image |
| render_url | Fetch a URL, run its scripts, render → image |
| run_script | Execute arbitrary JavaScript in the headless DOM+JS engine; returns text, JSON, or an image |
| get_zxp_docs | Return API docs and worked examples — call this before writing a run_script |
| store_save | Persist text or binary data (e.g. a rendered PNG) to a local SQLite store |
| store_get | Retrieve a stored entry by name; data is an ArrayBuffer |
| store_list | List store entries (metadata only) |
| store_delete | Delete a store entry by name |

The typical LLM workflow is: call get_zxp_docs to learn the zxp.* API, then call run_script with composed JavaScript to scrape, render, or process data. store_* lets the LLM persist intermediate results across stateless tool calls.

Your local storage is just:

// zexplorer runs this instantly. No DB connection setup needed.
const pageTitle = document.querySelector('title').textContent;
zxp.store.save("last_scraped_title", pageTitle); // Saved instantly to SQLite
zxp.store.get("last_scraped_title");
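Since store_get hands binary data back as an ArrayBuffer (see the tool table above), text entries need a decode step on the way out. A minimal sketch using the standard TextDecoder; the zxp.store call in the comment runs engine-side:

```javascript
// Decode an ArrayBuffer store entry back into a UTF-8 string.
function decodeStoreEntry(buf) {
  return new TextDecoder("utf-8").decode(buf);
}

// Engine-side: decodeStoreEntry(zxp.store.get("last_scraped_title"))
// Local round-trip check with the standard TextEncoder:
const text = decodeStoreEntry(new TextEncoder().encode("hello").buffer);
```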

Smoke-test with curl:

# Text result
curl -s -X POST http://localhost:9984/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"run_script","arguments":{"script":"const a=10,b=32; `The answer is ${a+b}`"}}}'
# → {"jsonrpc":"2.0","id":1,"result":{"content":[{"type":"text","text":"The answer is 42"}]}}

# Image result
curl -s -X POST http://localhost:9984/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"render_html","arguments":{"html":"<h1 style=\"color:red\">Hello MCP!</h1>","width":400}}}'
# → {"jsonrpc":"2.0","id":2,"result":{"content":[{"type":"image","data":"iVBORw0KGgo....","mimeType":"image/png"}]}}
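If you drive the endpoint from your own tooling instead of curl, the JSON-RPC envelope is easy to build programmatically. A sketch (the helper name is ours; the tool names and endpoint are as shown above):

```javascript
// Build a JSON-RPC 2.0 tools/call body for the zexplorer MCP endpoint.
function mcpToolCall(id, name, args) {
  return JSON.stringify({
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  });
}

// e.g. POST this body to http://localhost:9984/mcp
const body = mcpToolCall(1, "run_script", {
  script: "const a=10,b=32; `The answer is ${a+b}`",
});
```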

Use run_script to build a D3 chart from CSV data — the LLM composes the JS and gets an image back:

Source: https://github.com/ndrean/zexplorer/blob/main/src/examples/d3_chart/example_d3.js

curl -s -X POST http://localhost:9984/run --data-binary @src/examples/d3_chart/example_d3.js > output_chart.webp
<img src="https://github.com/ndrean/zexplorer/blob/main/src/examples/d3_chart/output_chart.webp" alt="output_chart" width="500"> <br>

Generative template

You may want an LLM to generate HTML+CSS for you: the engine has built-in support for the SSE text/event-stream content type.

We showcase the local provider Ollama with the 4.7 GB model qwen2.5-coder:7b. This can be extended to any provider (OpenAI, Anthropic, Gemini) by adapting the LLM response parsing.

  • Our local Ollama is up and running: curl -s http://localhost:11434/api/tags | head -c 200 returns {"models":[{"name":"qwen2.5-coder:7b",....}.
  • The dev-server is up and running: ./zig-out/bin/zxp serve .
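To adapt the parsing to another provider, the core task is extracting the text chunks from the stream. A sketch assuming SSE `data:` lines whose JSON carries an Ollama-style `response` field (field names differ per provider):

```javascript
// Concatenate the "response" fields of SSE data: lines (Ollama-style payloads).
function collectSSEText(raw) {
  let out = "";
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data:")) continue; // skip non-data lines
    const payload = line.slice(5).trim();
    if (payload === "[DONE]") break; // OpenAI-style stream terminator
    try {
      out += JSON.parse(payload).response ?? ""; // ignore malformed chunks
    } catch (_) {}
  }
  return out;
}
```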

First example: render a generative <img> component.

<img src="http://localhost:9984/render_llm?prompt=3+metric+cards+Revenue+%2412k+Users+340+MRR+%244.2k&width=600&format=png">

Let's "live-serve" this component in a browser. The browser sends a GET request to the dev-server, which in turn reaches the LLM. Depending on the mood of the LLM, you can get this image:

<img src="https://github.com/ndrean/zexplorer/blob/main/src/examples/generative/img_embedded.png" alt="generative template" width="400">

Second example: interactive generative form

The HTML below is a form where we select a more elaborate prompt. On submission, a JavaScript snippet POSTs the prompt to the dev-server's /render_llm endpoint.

Source: https://github.com/ndrean/zexplorer/blob/main/src/examples/generative/render_llm_demo.html

<details><summary>a FORM textarea INPUT populated by four buttons with a submit button</summary>
<section>
  <h2>Interactive prompt (POST → base64)</h2>

  <div class="quick-prompts">
    <button type="button" data-prompt="A responsive table with 3 columns: Name, Status, Amount. 5 realistic sample rows, blue header.">Table</button>
    <button type="button" data-prompt="A dashboard card grid: 4 KPI cards (Revenue $42k ↑12%, Users 3.4k ↑5%, Churn 2.1% ↓0.3%, MRR $8.2k ↑18%). Clean white cards, subtle shadows.">KPI cards</button>
    <button type="button" data-prompt="A horizontal progress tracker with 4 steps: Ordered, Processing, Shipped, Delivered. Step 2 is active in blue.">Progress steps</button>
    <button type="button" data-prompt="A minimal invoice: logo placeholder, billed-to block, line-item table (Qty, Description, Unit Price, Total), grand total row.">Invoice</button>
  </div>

  <form id="gen-form">
    <textarea
      name="prompt"
      placeholder="Describe a UI component to generate"></textarea>
    <button type="submit">Generate</button>
  </form>
</section>
</details>