AutoRestTest

Introduction

AutoRestTest is a complete testing framework for automated REST API testing that combines graph theory, Large Language Models (LLMs), and multi-agent reinforcement learning (MARL) to parse an OpenAPI Specification and create enhanced, comprehensive test cases. AutoRestTest specifically supports OpenAPI Specification 3.0.

Watch this demonstration video of AutoRestTest to learn how it solves complex challenges in automated REST API testing, as well as its configuration, execution steps, and output.

[!NOTE] Following the release of the demonstration video, the code base has been refactored. Refer to this README.md for the most current setup and execution details.

<p align="center"> <a href="https://www.youtube.com/watch?v=VVus2W8rap8"> <img src="https://img.youtube.com/vi/VVus2W8rap8/0.jpg" alt="Watch the video"> </a> </p>

The program uses LLMs for natural-language processing during the creation of the reinforcement learning tables and graph edges. AutoRestTest supports any LLM from an OpenAI-API compatible provider, including OpenAI, OpenRouter, Azure OpenAI, and local models (LocalAI, LM Studio, vLLM, Ollama, etc.).

[!IMPORTANT] An API key from your chosen LLM provider is required. The cost per execution depends on your provider and model choice. For reference, when testing an average API with ~15 operations using GPT-4o-mini, the cost was approximately $0.10.

Terminal User Interface (TUI)

AutoRestTest features a modern, interactive terminal user interface built with Rich that provides:

Interactive Configuration Wizard

On startup, AutoRestTest launches an interactive configuration wizard that allows you to:

  • Select API specifications from discovered files or enter custom paths
  • Choose LLM providers (OpenAI, OpenRouter, Local) with pre-configured model options
  • Configure test duration with convenient presets (5, 10, 20, 30, 60 minutes)
  • Adjust Q-learning parameters (learning rate, discount factor, exploration)
  • Toggle caching options for faster repeated runs
  • Override API URLs for local testing

The wizard uses sensible defaults from configurations.toml, so you can simply press Enter to accept defaults or customize any setting.
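As an illustrative sketch of the defaults the wizard draws on (only keys documented later in this README are shown; values are the documented defaults), configurations.toml might contain:

```toml
# Illustrative fragment of configurations.toml -- keys shown here are
# the ones documented in the Configuration section below.
[spec]
location = "aratrl-openapi/market2.yaml"

[q_learning]
learning_rate = 0.1
discount_factor = 0.9
max_exploration = 1
```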

Live Execution Dashboard

During request generation, a real-time dashboard displays:

╔══════════════════════════════════════════════════════════════════════════════╗
║ [STATUS] Request Generation                                                   ║
╠══════════════════════════════════════════════════════════════════════════════╣
║  ┌──────────────────────────────────────────────────────────────────────┐    ║
║  │ Time Elapsed: 00:15:32            Time Remaining: 00:04:28           │    ║
║  │ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.8%    │    ║
║  └──────────────────────────────────────────────────────────────────────┘    ║
║                                                                               ║
║  Successfully Processed (2xx) Operations:          1,482                     ║
║  Operation Coverage:                               85.2%                     ║
║  Unique Server Errors (5xx):                       7                         ║
║  Total Requests Sent:                              3,847                     ║
║                                                                               ║
║  Status Code Distribution                                                     ║
║  ┌──────────────────────────────────────────────────────────────────────┐    ║
║  │ 200: ████████████████████████████████████████████░░░░░░░░  1,482    │    ║
║  │ 404: ████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░    150    │    ║
║  │ 500: █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░      7    │    ║
║  └──────────────────────────────────────────────────────────────────────┘    ║
║                                                                               ║
║  Current Operation: GET /api/users/{id}                                       ║
║                                                                               ║
║  [COST] Total LLM usage: $0.11 USD                                           ║
╚══════════════════════════════════════════════════════════════════════════════╝

Features include:

  • Real-time progress tracking with elapsed and remaining time
  • Color-coded status codes (green for 2xx, orange for 4xx, red for 5xx)
  • Visual progress bars for status code distribution
  • Live cost estimation based on token usage
  • Current operation indicator showing what's being tested
  • Mutation counter tracking the number of fuzzing mutations applied

Q-Table Initialization Progress

During the Q-table initialization phase, the Value Agent and Header Agent (if enabled) perform LLM calls to generate test values for each API operation. A live progress display shows:

╭──── ⚙ Value Agent Q-Table Generation ────╮
│                                           │
│   ━━━━━━━━━━━━━━━━━━━━━━━───────  67.5%  │
│                                           │
│   Operations: 27/40  │  Elapsed: 03:45   │
│                                           │
│   ▶ POST /api/users                       │
│                                           │
╰───────────────────────────────────────────╯

This progress display appears for both Value Agent and Header Agent initialization, showing:

  • Visual progress bar with percentage complete
  • Operation count (completed/total)
  • Elapsed time
  • Current operation being processed

TUI Command Line Options

| Option | Description |
|--------|-------------|
| --skip-wizard | Skip configuration wizard, use configurations.toml directly |
| --quick | Quick setup wizard (essential settings only) |
| -s, --spec PATH | Override specification path |
| -t, --time SECONDS | Override test duration |
| --width N | Set TUI display width (default: 100) |

Examples:

# Full interactive mode (default)
poetry run autoresttest

# Quick setup - only essential settings
poetry run autoresttest --quick

# Skip wizard entirely, use config file
poetry run autoresttest --skip-wizard

# Override spec and duration via CLI
poetry run autoresttest -s specs/original/oas/spotify.yaml -t 600

Installation

We recommend using Poetry with pyproject.toml for dependency management and scripts. A poetry.lock file pins exact versions.

Steps:

  1. Clone the repository.
  2. Ensure Python 3.10.x is available (project targets >=3.10,<3.11).
  3. Install dependencies with Poetry (uses poetry.lock if present):
    • poetry install
  4. Create a .env file in the project root and add:
    • API_KEY='<YOUR_API_KEY>'

Alternatives (provided but not recommended):

  • pip install -r requirements.txt
  • conda env create -f autoresttest.yaml

Optionally, to test specific APIs, place their OpenAPI Specification files in a folder within the root directory. For convenience, we provide a large collection of OpenAPI Specifications for popular, widely used APIs in the aratrl-openapi and specs directories.

Running Local Services

AutoRestTest includes support for running local REST API services for testing. Some services require building before use:

JDK 8_1 Services (features-service, ncs, scs)

These services must be built before first use:

# Make the build script executable
chmod +x services/build_jdk8_1_services.sh

# Build all JDK 8_1 services
bash services/build_jdk8_1_services.sh

After building, start a service:

cd services
python3 run_service_mac.py features-service no_token

For detailed instructions, see FEATURES_SERVICE_SETUP.md.

Other Supported Services

Other services like genome-nexus, language-tool, youtube, etc. can be run without building:

cd services
python3 run_service_mac.py <service-name> <token>

See services/README.md for the full list of supported services.

At this point, installation is complete and the software can be executed. However, complete the following configuration steps first to ensure a purposeful run.

Configuration

There is a wide array of configuration options available within the codebase. All configuration options are easily accessible via a single TOML file at the project root: configurations.toml.

Below are the relevant settings and where to find them in configurations.toml.

1. Specifying the API Specification

If you intend to use your own OpenAPI Specification file as described in the Installation section, set the relative path (from the project root) to that file in configurations.toml under [spec].location.

Only .yaml and .json files are supported (OpenAPI 3.0). Example: aratrl-openapi/market2.yaml.

Specification parsing is handled by Prance. If your spec has circular/self-referencing $ref chains, you can tune the resolver behavior with:

  • [spec].recursion_limit (default: 1) — the maximum number of times a circular reference may appear in the resolution stack before a placeholder schema is substituted; a value of 1 means a self-referencing element is resolved once before being replaced.
  • [spec].strict_validation (default: true) — when true, the OpenAPI spec is strictly validated and parsing stops on errors; when false, invalid sections are skipped where possible so execution can continue.
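Putting the defaults described above together, the [spec] section of configurations.toml would look like this (the path is the example used earlier in this section):

```toml
# [spec] section sketch using the documented defaults
[spec]
location = "aratrl-openapi/market2.yaml"  # relative path to the OpenAPI 3.0 spec
recursion_limit = 1                       # circular $ref resolutions before substituting a placeholder
strict_validation = true                  # stop on spec errors; set false to skip invalid sections
```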

2. Configuring Reinforcement Learning Parameters

Configure Q-learning parameters in configurations.toml:

  • [q_learning].learning_rate (default: 0.1)
  • [q_learning].discount_factor (default: 0.9)
  • [q_learning].max_exploration (epsilon; default: 1, decays over time to 0.1)

Instead of limiting the number of episodes, the program limits RL iterations using the configured time duration for the run.
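For intuition, the three parameters above plug into the standard tabular Q-learning update rule. The sketch below is a generic illustration of that rule and of a linear epsilon decay from max_exploration down to 0.1, not AutoRestTest's actual implementation:

```python
def q_update(q_table, state, action, reward, next_action_values,
             learning_rate=0.1, discount_factor=0.9):
    """Apply one tabular Q-learning update:
    Q(s,a) <- Q(s,a) + lr * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    current = q_table.get((state, action), 0.0)
    best_next = max(next_action_values) if next_action_values else 0.0
    q_table[(state, action)] = current + learning_rate * (
        reward + discount_factor * best_next - current
    )
    return q_table[(state, action)]


def exploration_rate(step, total_steps, max_exploration=1.0, min_exploration=0.1):
    """Linearly decay epsilon from max_exploration to min_exploration over the run."""
    fraction = min(step / total_steps, 1.0)
    return max_exploration + fraction * (min_exploration - max_exploration)
```

With learning_rate=0.1 and discount_factor=0.9, a positive reward for an operation/parameter pair nudges its Q-value upward, while the decaying epsilon shifts the agents from exploring new inputs toward exploiting known high-reward ones as the time budget is consumed.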

No findings