
AUTOTEST

An open-source Generative AI (GenAI) framework and application designed to generate automated test cases and Python Selenium scripts after dynamically analyzing a web page using large language models (LLMs).

Install / Use

/learn @mindfiredigital/AUTOTEST

README

AUTOTEST: Automated test case and Selenium script generation using LLM

<img src="./autotest_image.jpg" alt="Project Logo" width="100" height="auto">

Description

An open-source Generative AI (GenAI) application designed to generate automated test cases and Python Selenium scripts after dynamically analyzing a web page using large language models (LLMs). This AI-driven testing tool leverages AI and machine learning for test case generation, test script optimization, and automated test execution. It performs recursive extraction of unique internal URLs from the base URL using breadth-first search (BFS) up to a specified depth, then analyzes each page to extract its metadata and generate page-specific test cases and executable Python Selenium scripts. The valid set of data inputs required to test page authentication functionality, such as login or registration, can be provided in the auth_test_data.json file. The tool generates a comprehensive suite of test cases covering both the positive and negative functionality of the web page.
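The recursive URL extraction described above can be pictured as a standard BFS over internal links. A minimal sketch, assuming a `get_links(url)` callable that returns the internal links found on a page (the names here are illustrative, not the tool's actual API in url_extract.py):

```python
from collections import deque

def extract_urls(base_url, get_links, depth=1):
    """Breadth-first crawl: collect unique internal URLs up to `depth`
    hops away from `base_url`. `get_links(url)` is a caller-supplied
    function returning the internal links found on a page."""
    seen = {base_url}
    frontier = deque([(base_url, 0)])
    while frontier:
        url, level = frontier.popleft()
        if level >= depth:
            continue  # do not expand pages beyond the requested depth
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                frontier.append((link, level + 1))
    return seen
```

Because visited URLs are tracked in a set, each page is fetched at most once even when pages link to each other.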

Features

  • Support for All Open-Source LLM Models:

    • The application is built with comprehensive support for all major open-source and closed-source LLMs through LangChain's LLM abstraction layer. This allows users to select from a wide range of models, ensuring they can choose the one that best fits their specific needs and use cases.
  • Adaptable to any website or web page

    • The LLM prompts for page analysis, page-specific test case generation, and Selenium script generation are deliberately generic, which makes the system adaptable to any website or web page.
  • URL extraction up to a given depth

    • Adjust the depth parameter in the url_extract.py file to perform recursive unique URL extraction up to that depth with breadth-first search (BFS). The parameter can be provided either via the CLI or directly in the function call.
  • Dynamic data-driven testing

    • Valid and invalid sets of test data can be provided in the auth_test_data.json file for data-driven test case generation. The test cases will include the test data only if the page under test requires authentication or has forms to fill out.
  • Context-aware test case generation

    • The page metadata and HTML page source are included in the LLM prompt itself to ensure page-specific, context-aware test case generation.
  • Dual model support for page analysis and selenium code generation

    • Currently the system uses two different models: one for page analysis and test case generation, and one for Selenium script generation.
    • Analysis model: "gpt-4o-2024-11-20"
    • Selenium model: "gpt-4.1-2025-04-14"
    • Follow the format given in the llm_config.yaml file to introduce and use different models as required:
    model_provider: "openai"  # Options: openai, groq, anthropic, etc.
    model_settings:
      openai:
        analysis_model: "gpt-4o-2024-11-20" # For page analysis and test generation
        selenium_model: "gpt-4.1-2025-04-14" # For script generation
        temperature: 0.2
    
  • Robust web-page analysis using both static functions with common Selenium selectors and LLM-powered analysis

    • The system performs robust, dynamic analysis of the target web page to extract its metadata using a two-pronged approach.
    • First, it uses LLM-powered dynamic extraction of page metadata, parsed into a specified valid JSON format.
    • Second, it uses standard Selenium selectors in static functions to extract the page metadata as a fallback if the LLM-powered extraction fails.
  • Provision in generated code to wait for the web page's JavaScript/AJAX to finish loading before starting test steps

    • On some pages JavaScript and AJAX loading may take time, so the generated test script includes a waiting provision to handle slow JS/AJAX framework loading.
  • Provision for manual intervention

    • If the web page contains a security requirement such as a CAPTCHA, the test process will wait for some time, allowing the user to resolve the security requirement before the test proceeds.
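The two-pronged metadata extraction above amounts to a try/fallback pattern. A minimal sketch, not the tool's actual code: `llm_extract` and `static_extract` are stand-ins for the LLM call and the Selenium-selector functions.

```python
import json

def extract_page_metadata(page_source, llm_extract, static_extract, log=print):
    """Try LLM-powered metadata extraction first; if its output is not
    a valid JSON object, fall back to static selector-based parsing."""
    try:
        raw = llm_extract(page_source)      # hypothetical LLM call
        metadata = json.loads(raw)          # must parse as valid JSON
        if not isinstance(metadata, dict):
            raise ValueError("metadata is not a JSON object")
        return metadata
    except (json.JSONDecodeError, ValueError) as exc:
        log(f"LLM extraction failed ({exc}); using static fallback")
        return static_extract(page_source)  # hypothetical selector-based parser
```

The fallback only triggers when the LLM response fails to parse into the expected shape, so a healthy LLM path never touches the static selectors.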

Getting Started

Instructions on how to get started, including installation, prerequisites, and basic usage.

Prerequisites

  • Understanding of test cases for web pages
  • Knowledge of Selenium test scripts
  • Selenium version 4.15.2
  • webdriver-manager version 4.0.2
  • Python version 3.10.12 installed
  • The API key for the chosen provider (such as OpenAI or Gemini) in a .env file, required to access the LLM. For example: OPENAI_API_KEY=place-your-api-key-here
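As a minimal sketch of that last prerequisite, the key can be read from the environment once the .env file has been loaded (for example by a helper such as python-dotenv); the variable name follows the README's example:

```python
import os

# Demo only: seed a placeholder so the lookup below succeeds; in real use
# the value comes from your .env file or shell environment.
os.environ.setdefault("OPENAI_API_KEY", "place-your-api-key-here")

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise SystemExit("OPENAI_API_KEY is not set; add it to your .env file")
```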

Installation

To install the autotest tool:

  • Clone the repo
  • Navigate to your project folder
  • Create virtualenv using:
    • For Linux/Ubuntu- python3 -m venv myenv
    • For Windows, open Command Prompt or PowerShell, and run- python -m venv myenv
    • For further reference, visit this LINK
  • Activate the virtual environment
    • For Linux/Ubuntu using source myenv/bin/activate
    • For Windows using myenv\Scripts\activate
  • Navigate to the main folder cd selenium-based-llm-model
  • Install requirements using requirements.txt pip install -r requirements.txt
  • If using playwright testing framework, then also install the following dependency: playwright install chromium
  • Provide the test data in auth_test_data.json file in the same directory.
  • Provide the name of the test data file in the arguments of this method:
def load_test_data(self, file_path="auth_test_data.json"):
    try:
        with open(file_path) as f:
            data = json.load(f)

        # validate(data, AUTH_DATA_SCHEMA)
        return data
    except Exception as e:
        self.logger.error(f"Failed to load test data: {str(e)}")
        return None
  • Follow exactly the same schema for test data as specified under the Usage heading.
  • Run the autotest application using command python autotest.py --url "url-to-be-tested" --loglevel DEBUG
  • Run the url extraction utility using the command python url_extract.py --url "base-url" --depth 1 --loglevel INFO
  • Provide the LLM configuration in llm_config.yaml file.

To install and build the autotest python package:

  • Clone the repo
  • Navigate to your project folder
  • Create virtualenv using:
    • For Linux/Ubuntu- python3 -m venv myenv
    • For Windows, open Command Prompt or PowerShell, and run- python -m venv myenv
    • For further reference, visit this LINK
  • Activate the virtual environment
    • For Linux/Ubuntu using source myenv/bin/activate
    • For Windows using myenv\Scripts\activate
  • Navigate to the main folder cd autotest_package
  • Install the package in Editable (Development) mode: pip install -e .
  • Run the CLI to check if your package is available (Verify Installation): autotest-cli --help
  • Or build a wheel for distribution: pip install build followed by python -m build
  • This creates a dist/ directory containing: autotest_web_generator-1.0.0-py3-none-any.whl and autotest_web_generator-1.0.0.tar.gz
  • Install the built package: pip install dist/autotest_web_generator-1.0.0-py3-none-any.whl
  • Now get started by using the package via CLI: autotest-cli --url https://example.com --loglevel DEBUG --wait-time "30 seconds" --testing-tool "selenium" --language "python"

Usage

The application is mainly a CLI (command-line interface) based tool. The primary command-line parameters include:

  • Base URL

  • Depth parameter (optional; the default URL extraction depth is 1)

  • Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)

  • The logs for the entire testing process are stored in the logs folder. Run the program at the DEBUG logging level to see very detailed logs, including the test cases parsed in JSON format.

  • The generated Selenium test scripts are stored in the test_scripts folder. Create the folder in the same directory as the source code.

  • The test results and reports are generated and stored in the reports folder.

  • In the .env file (in the same directory as the source code), place your API key: OPENAI_API_KEY=your-api-key
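The CLI surface described above can be sketched with argparse. The argument names follow the README's examples; the defaults and choices are assumptions for illustration, not the tool's actual source:

```python
import argparse

def build_parser():
    """Sketch of the autotest CLI: base URL, extraction depth, log level."""
    parser = argparse.ArgumentParser(prog="autotest")
    parser.add_argument("--url", required=True, help="Base URL to be tested")
    parser.add_argument("--depth", type=int, default=1,
                        help="Recursion depth for URL extraction")
    parser.add_argument("--loglevel", default="INFO",
                        choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
                        help="Logging verbosity")
    return parser

# Example invocation mirroring the README's command line:
args = build_parser().parse_args(["--url", "https://example.com", "--loglevel", "DEBUG"])
```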

If you want to perform a data-driven test for forms or login/registration pages, the valid and invalid sets of inputs must be provided in auth_test_data.json using the following schema.

Test data schema

AUTH_DATA_SCHEMA = {
    "type": "object",
    "properties": {
        "credentials": {
            "type": "object",
            "properties": {
                "valid": {"type": "object"},
                "invalid": {"type": "object"}
            }
        },
        "registration_fields": {
            "type": "object",
            "patternProperties": {
                "^.*$": {
                    "type": "object",
                    "properties": {
                        "valid": {"type": "array"},
                        "invalid": {"type": "array"}
                    }
                }
            }
        }
    }
}