JAW: A Graph-based Security Analysis Framework for Client-side JavaScript


<p align="center"> <a href="//soheilkhodayari.github.io/JAW/"> <img align="center" alt="JAW" src="docs/assets/logo.png" height="195"> </a> </p> <p align="center"> <span><b> J A W</b></span> </p> <p align="center"> <a href="https://soheilkhodayari.github.io/JAW/">Website</a> | <a href="https://github.com/SoheilKhodayari/JAW/tree/master/docs">Docs</a> | <a href="https://github.com/SoheilKhodayari/JAW/tree/master/docs/installation.md">Setup</a> | <a href="https://github.com/SoheilKhodayari/JAW/tree/master/crawler">Crawler</a> | <a href="#quick-start"> Quick Start </a> | <a href="https://github.com/SoheilKhodayari/JAW/blob/master/analyses/example/example_analysis.py"> Docker (Example)</a> </p>


An open-source prototype implementation of property graphs for JavaScript, based on the Esprima parser and the ESTree SpiderMonkey spec. JAW can be used to analyze the client side of web applications and JavaScript-based programs.

This project is licensed under GNU AFFERO GENERAL PUBLIC LICENSE V3.0. See here for more information.

JAW has a Github pages website available at <a href="https://soheilkhodayari.github.io/JAW/">https://soheilkhodayari.github.io/JAW/</a>.

Release Notes:

  • Sept 2025, JAW-v5: DOM gadget detection support via this fork.
  • Dec 2024, JAW-v4: JAW updated with open redirect detection queries.
  • Oct 2023, JAW-v3 (Sheriff): JAW updated to detect client-side request hijacking vulnerabilities.
  • July 2022, JAW-v2 (TheThing): JAW updated to its next major release with the ability to detect DOM Clobbering vulnerabilities. See JAW-V2 branch.
  • Dec 2020, JAW-v1 : first prototype version. See JAW-V1 branch.

Contents

Overview of JAW

  1. Test Inputs
  2. Data Collection
  3. HPG Construction
  4. Analysis and Outputs

Setup

  1. Installation

Quick Start

  1. Running the Pipeline
  2. Quick Example
  3. Crawling and Data Collection
  4. Graph Construction
  5. Security Analysis
  6. Test Web Application

Further Information

  1. Detailed Documentation
  2. Contribution and Code of Conduct
  3. Academic Publication

Overview of JAW

The architecture of JAW is shown below.

<p align="center"> <img align="center" width="900" src="docs/assets/architecture_v3.png"> </p>

Test Inputs

JAW can be used in two distinct ways:

  1. Arbitrary JavaScript Analysis: Utilize JAW for modeling and analyzing any JavaScript program by specifying the program's file system path.

  2. Web Application Analysis: Analyze a web application by providing a single seed URL.

Data Collection

  • JAW features several JavaScript-enabled web crawlers for collecting web resources at scale.

HPG Construction

  • Use the collected web resources to create a Hybrid Program Graph (HPG), which will be imported into a Neo4j database.

  • Optionally, supply the HPG construction module with a mapping of semantic types to custom JavaScript language tokens, facilitating the categorization of JavaScript functions based on their purpose (e.g., HTTP request functions).
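As a rough illustration, such a mapping associates each semantic type with the language tokens that carry it. The format and names below are assumptions made for the sake of the example, not JAW's actual input format; see the HPG construction documentation for the real schema.

```python
# Hypothetical semantic-type mapping (illustrative only): the label "REQ"
# marks functions that send HTTP requests, "STORAGE" marks storage sinks.
semantic_types = {
    "REQ": ["fetch", "XMLHttpRequest.open", "$.ajax"],
    "STORAGE": ["localStorage.setItem", "document.cookie"],
}

def tags_for(callee, mapping=semantic_types):
    """Return the semantic types whose token list contains `callee`."""
    return [t for t, tokens in mapping.items() if callee in tokens]

print(tags_for("fetch"))  # → ['REQ']
```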

Analysis and Outputs

  • Query the constructed Neo4j graph database for various analyses. JAW offers utility traversals for data flow analysis, control flow analysis, reachability analysis, and pattern matching. These traversals can be used to develop custom security analyses.

  • JAW also includes built-in traversals for detecting client-side CSRF, DOM Clobbering and request hijacking vulnerabilities.

  • Outputs are stored in the same folder as the input.
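For instance, a custom analysis might search the imported graph for call sites of a particular function. The snippet below is a minimal sketch: the property names (`Type`, `Code`) and edge label (`AST_parentOf`) are assumptions about the HPG schema made for illustration; consult the JAW documentation for the actual node and relationship names.

```python
def call_site_query(callee):
    """Build a Cypher query matching calls whose callee Identifier is `callee`.

    The schema names used here (Type, Code, AST_parentOf) are illustrative
    guesses, not necessarily those of the real HPG.
    """
    return (
        "MATCH (call {Type: 'CallExpression'})"
        "-[:AST_parentOf]->(id {Type: 'Identifier'}) "
        f"WHERE id.Code = '{callee}' "
        "RETURN call"
    )

# Running this against the Neo4j instance the HPG was imported into would
# use the official `neo4j` Python driver, roughly:
#
#   from neo4j import GraphDatabase
#   driver = GraphDatabase.driver("bolt://localhost:7687",
#                                 auth=("neo4j", "<password>"))
#   with driver.session() as session:
#       for record in session.run(call_site_query("fetch")):
#           print(record["call"])
```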

Setup

The installation script relies on the following prerequisites:

  • The latest version of the npm package manager (Node.js)
  • Any stable version of Python 3.x
  • The pip package manager for Python

Afterwards, install the necessary dependencies via:

$ ./install.sh

For detailed installation instructions, please see here.

Quick Start

Running the Pipeline

You can run an instance of the pipeline in a background screen via:

$ python3 -m run_pipeline --conf=config.yaml

The CLI provides the following options:

$ python3 -m run_pipeline -h

usage: run_pipeline.py [-h] [--conf FILE] [--site SITE] [--list LIST] [--from FROM] [--to TO]

This script runs the tool pipeline.

optional arguments:
  -h, --help            show this help message and exit
  --conf FILE, -C FILE  pipeline configuration file. (default: config.yaml)
  --site SITE, -S SITE  website to test; overrides config file (default: None)
  --list LIST, -L LIST  site list to test; overrides config file (default: None)
  --from FROM, -F FROM  the first entry to consider when a site list is provided; overrides config file (default: -1)
  --to TO, -T TO        the last entry to consider when a site list is provided; overrides config file (default: -1)

Input Config: JAW expects a .yaml config file as input. See config.yaml for an example.

Hint: the config file specifies different passes (e.g., crawling, static analysis, etc.) that can be enabled or disabled for each vulnerability class. This allows running the tool's building blocks individually, or in a different order (e.g., crawl all webapps first, then conduct the security analysis).
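As a rough sketch of what such a config might look like (the key names below are illustrative assumptions, not necessarily those the shipped config.yaml uses; the `--site`/`--list` CLI flags override the site entries):

```yaml
# Illustrative sketch only -- key names are assumptions; see the shipped
# config.yaml for the real schema.
testbed:
  site: https://example.com       # overridden by --site / --list
passes:
  crawling: true                  # collect scripts and page state
  static: true                    # build HPGs and run the analysis queries
```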

Quick Example

To run a quick example that builds a property graph and runs Cypher queries over it, do:

$ python3 -m analyses.example.example_analysis --input=$(pwd)/data/test_program/test.js

Crawling and Data Collection

This module collects the data (i.e., JavaScript code and state values of web pages) needed for testing. If you want to test a specific JavaScript file that you already have on your file system, you can skip this step.

JAW has crawlers based on Selenium (JAW-v1), Puppeteer (JAW-v2, v3) and Playwright (JAW-v3). For the most up-to-date features, use the Puppeteer- or Playwright-based versions.

Playwright CLI with Foxhound

This web crawler employs foxhound, an instrumented version of Firefox, to perform dynamic taint tracking as it navigates through webpages. To start the crawler, do:

$ cd crawler
$ node crawler-taint.js --seedurl=https://google.com --maxurls=100 --headless=true --foxhoundpath=<optional-foxhound-executable-path>

By default, foxhoundpath is set to the directory crawler/foxhound/firefox, which is expected to contain a binary named firefox.

Note: you need a build of foxhound to use this version. An Ubuntu build is included in the JAW-v3 release.

Puppeteer CLI

To start the crawler, do:

$ cd crawler
$ node crawler.js --seedurl=https://google.com --maxurls=100 --browser=chrome --headless=true

See here for more information.

Selenium CLI

To start the crawler, do:

$ cd crawler/hpg_crawler
$ vim docker-compose.yaml # set the websites you want to crawl here and save
$ docker-compose build
$ docker-compose up -d

Please refer to the documentation of the hpg_crawler here for more information.

Graph Construction

HPG Construction CLI

To generate an HPG for a given (set of) JavaScript file(s), do:

$ node engine/cli.js  --lang=js --graphid=graph1 --input=/in/file1.js --input=/in/file2.js --output=$(pwd)/data/out/ --mode=csv

optional arguments:
  --lang:     language of the input program
  --graphid:  an identifier for the generated HPG
  --input:    path of the input program(s)
  --output:   path of the output HPG
  --mode:     determines the output format (csv or graphML)

HPG Import CLI

To import an HPG into a Neo4j graph database (docker instance), do:

$ python3 -m hpg_neo4j.hpg_import --rpath=<path-to-the-folder-of-the-csv-files> --id=<xyz> --nodes=<nodes.csv> --edges=<rels.csv>
$ python3 -m hpg_neo4j.hpg_import -h

usage: hpg_import.py [-h] [--rpath P] [--id I] [-
