<p align="center"> <img src="https://raw.githubusercontent.com/alexbeatnik/ManulEngine/main/images/manul.png" alt="ManulEngine mascot" width="180" /> </p>

ManulEngine

PyPI PyPI Downloads VS Code Marketplace Status: Alpha

Deterministic, DSL-first web and desktop automation on top of Playwright, with explainable heuristics, a standalone Python API, and optional local AI fallback.

Status

Status: Alpha. Developed by a single person.

This project is actively being battle-tested. Bugs are expected, APIs may evolve, and there are no promises about stability or production readiness. The core claim is transparency: when a step works, you should understand why; when it fails, you should have enough signal to diagnose it.

Core Philosophy

ManulEngine is an interpreter for the .hunt DSL. A hunt file expresses intent in plain English; the runtime snapshots the DOM, ranks candidates with heuristics, and executes through Playwright.

Determinism first

The primary resolver is not an LLM. It is a deterministic scoring system backed by DOM traversal and weighted heuristics:

  • DOM collection uses a native TreeWalker in injected JavaScript.
  • Candidate ranking is handled by DOMScorer.
  • Scores are normalized on a 0.0 to 1.0 confidence scale.
  • Weighted channels include cache, semantics, text, attributes, and proximity.
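The channel-weighting idea above can be sketched in a few lines of Python. This is an illustrative model only: the channel names mirror the README (cache, semantics, text, attributes, proximity), but the weights, function names, and data shapes are assumptions, not ManulEngine's actual DOMScorer internals.

```python
# Hypothetical sketch of weighted, normalized candidate scoring.
# Channel names follow the README; the weights are illustrative.

CHANNEL_WEIGHTS = {
    "cache": 0.20,
    "semantics": 0.25,
    "text": 0.30,
    "attributes": 0.15,
    "proximity": 0.10,
}

def score_candidate(channels: dict) -> float:
    """Combine per-channel scores (each 0.0-1.0) into one confidence value."""
    total = sum(CHANNEL_WEIGHTS[name] * channels.get(name, 0.0)
                for name in CHANNEL_WEIGHTS)
    # Weights sum to 1.0, so the result stays on the 0.0-1.0 scale.
    return round(total, 3)

def rank(candidates: dict) -> list:
    """Return (name, score) pairs, best first, for explainable output."""
    scored = [(name, score_candidate(ch)) for name, ch in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = {
    "button:Login": {"text": 0.94, "semantics": 0.90, "attributes": 0.33},
    "a:Login help": {"text": 0.55, "semantics": 0.20, "proximity": 0.40},
}
print(rank(candidates))
```

Because every channel contributes a named, bounded term, a losing candidate can be explained by pointing at the channel where it fell short, which is exactly the failure-analysis property the engine claims.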

That means the engine can explain more than "element not found". It can show whether a target lost because text affinity was weak, semantic alignment was poor, the candidate was hidden, or another channel outweighed it.

Transparency instead of AI magic

The recommended default is heuristics-only mode:

{
  "model": null,
  "browser": "chromium",
  "controls_cache_enabled": true,
  "semantic_cache_enabled": true
}

When a local Ollama model is enabled, it acts as a fallback for ambiguous cases rather than the primary execution path.
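The "heuristics first, model as fallback" control flow can be sketched as below. Everything here is a stand-in under stated assumptions: `resolve_with_heuristics`, `ask_local_model`, and the 0.5 confidence threshold are hypothetical, not ManulEngine's real API.

```python
# Illustrative sketch of "deterministic resolver first, AI fallback only
# for ambiguous cases". All names and the threshold are assumptions.

CONFIDENCE_THRESHOLD = 0.5

def resolve(step_text, candidates, model=None):
    best, confidence = resolve_with_heuristics(step_text, candidates)
    if confidence >= CONFIDENCE_THRESHOLD or model is None:
        # Deterministic path: same page state + step text => same result.
        return best
    # Fallback path: only reached when a model is configured AND the
    # heuristic score was too low to trust.
    return ask_local_model(model, step_text, candidates)

def resolve_with_heuristics(step_text, candidates):
    # Toy stand-in: pick the candidate label overlapping the step text most.
    def overlap(label):
        return len(set(label.lower().split()) & set(step_text.lower().split()))
    best = max(candidates, key=overlap)
    return best, overlap(best) / max(len(best.split()), 1)

def ask_local_model(model, step_text, candidates):
    raise NotImplementedError("would query a local Ollama model here")

print(resolve("Click the Login button", ["Login", "Help", "Sign out"]))
```

With `model=None` (the recommended default config shown above), the fallback branch is unreachable and execution stays fully deterministic.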

Dual-persona workflow

The authoring model is intentionally split across two layers:

  • QA, analysts, and operators write plain-English .hunt steps.
  • SDETs extend those flows with Python hooks, lifecycle setup, and custom controls when a UI or backend dependency should not be forced into the generic DSL path.

The intended boundary is straightforward:

  • Keep business intent and readable flow in the DSL.
  • Keep environment setup, backend interaction, and custom widget handling in Python.

Why ManulEngine

Most browser automation tools sold as "AI automation" are cloud wrappers around selectors and retries. ManulEngine aims at the opposite design.

Deterministic first, not AI-first

The runtime resolves DOM elements through a native JavaScript TreeWalker plus a weighted DOMScorer. That gives you a repeatable result from page state plus step text, not from prompt variance.

Explainable instead of opaque

When the engine chooses the wrong target, you should be able to inspect the actual scoring channels that drove the result. The point is not just success cases. The point is actionable failure analysis.

One artifact for two personas

QA, ops, and analysts can keep the flow readable in .hunt. SDETs can attach Python, lifecycle hooks, and custom controls without splitting the scenario into two separate systems.

Optional AI fallback, off by default

"model": null remains the recommended default. When a local Ollama model is enabled, it is a fallback for ambiguous cases, not the primary execution engine.

Four Automation Pillars

ManulEngine is not only a test runner. The same runtime and the same DSL can cover four adjacent use cases:

  1. QA and E2E testing
  2. RPA workflows
  3. Synthetic monitoring
  4. AI agent execution targets

QA and E2E testing

Write plain-English flows, verify outcomes, attach reports and screenshots when needed, and keep selectors out of the test source.

RPA workflows

Use the same DSL to log into portals, download files, fill forms, extract values, and hand work to Python when a backend or filesystem step is involved.

Synthetic monitoring

Pair .hunt files with @schedule: and manul daemon to run scheduled health checks with the same execution model as your test flows.

AI agent execution targets

If an external agent needs to drive the browser, .hunt is a safer constrained target than raw Playwright code because the runtime still owns validation, scoring, retries, and reporting.

Key Features

Explainability layers

The runtime and companion VS Code extension expose multiple explainability layers instead of forcing you to inspect a terminal dump.

CLI: --explain

manul --explain tests/saucedemo.hunt
manul --explain --headless tests/ --html-report

That mode prints candidate rankings and per-channel scoring breakdowns for each resolved step.

Representative CLI explain output:

┌─ EXPLAIN: Target = "Login"
│  Step: Click the 'Login' button
│
│  #1 <button> "Login"
│     total:      0.593
│     text:       0.281
│     attributes: 0.050
│     semantics:  0.225
│     proximity:  0.037
│     cache:      0.000
│
└─ Decision: selected "Login" with score 0.593

VS Code: title bar action

During a debug pause, the extension exposes Explain Current Step in the editor title bar so you can request explanation data for the paused step without leaving the editor.

VS Code: hover tooltips in debug mode

Run a hunt in Debug mode through Test Explorer, then hover over any resolved step line in the .hunt file. The extension shows the stored per-channel breakdown directly on that line.

Desktop and Electron automation via executable_path

ManulEngine is not limited to browser tabs. Because it runs on Playwright, it can also drive Electron-based desktop applications.

Set executable_path in the runtime config and use OPEN APP instead of NAVIGATE:

{
  "model": null,
  "browser": "chromium",
  "executable_path": "/path/to/YourElectronApp"
}
Then drive the application from a hunt file:
@context: Desktop smoke test
@title: Desktop Smoke

STEP 1: Attach to the window
    OPEN APP
    VERIFY that 'Welcome' is present

STEP 2: Exercise the main screen
    Click the 'Settings' button
    VERIFY that 'Preferences' is present

DONE.

Smart recorder for native controls

The recorder is meant to capture intent, not just raw pointer activity. A concrete example is native <select> handling: the injected recorder observes semantic change events and emits DSL such as Select 'Option' from 'Dropdown' instead of recording a brittle chain of low-level clicks on <option> elements.
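The intent-capture idea can be illustrated with a minimal event-to-DSL mapper. The event shape and function name here are hypothetical, not the injected recorder's actual protocol; the emitted step text matches the example in the paragraph above.

```python
# Hypothetical sketch: turn one semantic "change" event on a native
# <select> into a single readable .hunt step, instead of recording the
# low-level clicks that produced it. The event dict shape is assumed.

def emit_dsl_for_change(event: dict) -> str:
    """Map a recorded change event to one DSL line."""
    if event.get("tag") == "select":
        return f"Select '{event['selected_label']}' from '{event['label']}'"
    # Other control types would fall through to their own emitters.
    raise ValueError(f"no intent emitter for <{event.get('tag')}>")

event = {"tag": "select", "label": "Dropdown", "selected_label": "Option"}
print(emit_dsl_for_change(event))
```

The payoff is stability: the recorded step survives DOM restructuring of the `<option>` list, because it names the user's intent rather than the pointer path.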

Python hooks and custom controls

When the generic resolver should not be forced to understand a bespoke widget, ManulEngine provides an explicit SDET escape hatch:

  • [SETUP] / [TEARDOWN] hooks for environment and data setup.
  • CALL PYTHON for backend lookups or computed values.
  • @before_all / @after_all lifecycle hooks for suite-wide orchestration.
  • @custom_control handlers for complex UI elements.

That balance is intentional: keep the common path readable, and keep the edge cases programmable.

Public Python API (ManulSession)

For users who prefer writing automation in pure Python, the runtime exports ManulSession: an async context manager that owns the Playwright lifecycle and exposes clean methods for navigation, clicks, fills, verifications, and extraction.

import asyncio

from manul_engine import ManulSession

async def main():
    async with ManulSession(headless=True) as session:
        await session.navigate("https://example.com/login")
        await session.fill("Username field", "admin")
        await session.fill("Password field", "secret")
        await session.click("Log in button")
        await session.verify("Welcome")
        price = await session.extract("Product Price")

asyncio.run(main())

ManulSession can also execute raw DSL snippets against the already-open browser via run_steps():

async with ManulSession() as session:
    await session.navigate("https://example.com")
    result = await session.run_steps("""
        STEP 1: Search
            Fill 'Search' with 'ManulEngine'
            PRESS Enter
            VERIFY that 'Results' is present
    """)
    assert result.status == "pass"

State, variables, and scope

Variable handling is strict rather than ad hoc. The runtime supports @var:, EXTRACT, SET, and CALL PYTHON ... into {var} with deterministic placeholder substitution in downstream steps.

Useful patterns:

  • @var: for static test data at the top of the file.
  • EXTRACT ... into {var} for values pulled from the UI.
  • SET {var} = value for mid-run assignment.
  • CALL PYTHON module.func into {var} for backend-generated runtime values such as OTPs or tokens.

Scope precedence is explicit:

| Priority | Scope | Source |
|---|---|---|
| 1 | Row vars | @data: iteration values |
| 2 | Step vars | EXTRACT, SET, CALL PYTHON ... into {var} |
| 3 | Mission vars | @var: declarations |
| 4 | Global vars | lifecycle hooks and process-level state |
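The precedence table maps naturally onto a layered lookup. The sketch below models it with `collections.ChainMap`; the function name and argument shapes are illustrative, not ManulEngine's internals, but the priority order (row over step over mission over global) matches the table.

```python
# Sketch of the documented scope precedence for {var} substitution:
# row vars > step vars > mission vars > global vars.
import re
from collections import ChainMap

def substitute(text, row_vars, step_vars, mission_vars, global_vars):
    """Replace {var} placeholders using the highest-priority scope
    that defines each name."""
    scopes = ChainMap(row_vars, step_vars, mission_vars, global_vars)
    return re.sub(r"\{(\w+)\}", lambda m: str(scopes[m.group(1)]), text)

print(substitute(
    "Fill 'User' with '{username}' then enter '{otp}'",
    row_vars={"username": "row-user"},     # from a @data: iteration
    step_vars={"otp": "123456"},           # from CALL PYTHON ... into {otp}
    mission_vars={"username": "default"},  # @var: declaration, shadowed here
    global_vars={},
))
```

Here the row-level `username` shadows the mission-level `@var:` value, which is exactly what priority 1 beating priority 3 means in the table.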

Tags and data-driven runs

The runtime also supports selective execution and data-driven loops without changing the DSL model.

In the .hunt file header:

@tags: smoke, auth
@data: users.csv

Then run only the files matching a tag:

manul tests/ --tags smoke
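Tag selection amounts to a set intersection between a file's declared tags and the requested ones. A minimal sketch, assuming a simple `@tags:` header format as shown above (the real CLI does this internally; these function names are illustrative):

```python
# Minimal sketch of --tags selection: parse @tags: headers from hunt
# sources and keep the files whose tags intersect the requested set.
import re

def parse_tags(hunt_source: str) -> set:
    match = re.search(r"^@tags:\s*(.+)$", hunt_source, re.MULTILINE)
    if not match:
        return set()
    return {tag.strip() for tag in match.group(1).split(",")}

def select_files(files: dict, requested: set) -> list:
    """files maps path -> source; keep files sharing any tag with `requested`."""
    return [path for path, src in files.items()
            if parse_tags(src) & requested]

files = {
    "tests/login.hunt": "@tags: smoke, auth\n@title: Login\n...",
    "tests/report.hunt": "@tags: regression\n@title: Reports\n...",
}
print(select_files(files, {"smoke"}))
```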

Lifecycle orchestration and hooks

There are two levels of Python orchestration:

  • Per-file [SETUP] / [TEARDOWN] and inline CALL PYTHON for file-local setup or backend calls.
  • Suite-wide @before_all / @after_all lifecycle hooks for cross-file orchestration and process-level state.
No findings