
Promptbench

A unified evaluation framework for large language models


<div id="top"></div>

[![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url]

<!-- PROJECT LOGO -->
<br />
<div align="center">
  <a href="https://github.com/microsoft/promptbench">
    <img src="imgs/promptbench_logo.png" alt="Logo" width="300">
  </a>
  <p align="center">
    <strong>PromptBench</strong>: A Unified Library for Evaluating and Understanding Large Language Models.
    <br />
    <a href="https://arxiv.org/abs/2312.07910">Paper</a>
    ·
    <a href="https://promptbench.readthedocs.io/en/latest/">Documentation</a>
    ·
    <a href="https://llm-eval.github.io/pages/leaderboard.html">Leaderboard</a>
    ·
    <a href="https://llm-eval.github.io/pages/papers.html">More papers</a>
  </p>
</div>

<!-- TABLE OF CONTENTS -->
<details>
  <summary>Table of Contents</summary>
  <ol>
    <li><a href="#news-and-updates">News and Updates</a></li>
    <li><a href="#introduction">Introduction</a></li>
    <li><a href="#installation">Installation</a></li>
    <li><a href="#usage">Usage</a></li>
    <li><a href="#supported-datasets-and-models">Datasets and Models</a></li>
    <li><a href="#benchmark-results">Benchmark Results</a></li>
    <li><a href="#acknowledgments">Acknowledgments</a></li>
  </ol>
</details>
<!-- News and Updates -->

News and Updates

  • [19/08/2024] Add DyVal 2 (ICML 2024).
  • [19/08/2024] Merge PromptEval, an efficient multi-prompt evaluation method, into this repository.
  • [26/05/2024] Add support for GPT-4o.
  • [13/03/2024] Add support for multi-modal models and datasets.
  • [05/01/2024] Add support for BigBench Hard, DROP, ARC datasets.
  • [16/12/2023] Add support for Gemini, Mistral, Mixtral, Baichuan, Yi models.
  • [15/12/2023] Add detailed instructions for users to add new modules (models, datasets, etc.) examples/add_new_modules.md.
  • [05/12/2023] Published promptbench 0.0.1.
<!-- Introduction -->

Introduction

PromptBench is a PyTorch-based Python package for the evaluation of large language models (LLMs). It provides user-friendly APIs for researchers to evaluate LLMs. See the technical report: https://arxiv.org/abs/2312.07910.

Code Structure

What does promptbench currently provide?

  1. Quick model performance assessment: We offer a user-friendly interface that allows for quick model building, dataset loading, and evaluation of model performance.
  2. Prompt engineering: We have implemented several prompt engineering methods, such as few-shot Chain-of-Thought [1], EmotionPrompt [2], and ExpertPrompting [3].
  3. Evaluating adversarial prompts: promptbench integrates prompt attacks [4], enabling researchers to simulate black-box adversarial prompt attacks on models and evaluate their robustness (see details here).
  4. Dynamic evaluation to mitigate potential test data contamination: We integrated the dynamic evaluation framework DyVal [5], which generates evaluation samples on the fly with controlled complexity.
  5. Efficient multi-prompt evaluation: We integrated the efficient multi-prompt evaluation method PromptEval [8]. It fits an IRT-style model to an LLM's performance on a small sample of data, then uses that model to predict performance on unseen data. Tests on MMLU, BBH, and LMentry show that sampling only 5% of the data keeps the error between estimated and actual performance to around 2%.
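The PromptEval-style prediction described above can be illustrated with a minimal one-parameter IRT (Rasch) model. This is a pedagogical sketch, not PromptEval's actual implementation: the function names `rasch_probability` and `estimate_ability`, the gradient-ascent fit, and the example numbers are all assumptions made for illustration.

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """One-parameter IRT (Rasch) model: probability that a model with a
    given ability answers an item of a given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_ability(responses, difficulties, lr=0.1, steps=500):
    """Fit a single ability parameter to observed correct/incorrect
    responses on a small sampled subset, by gradient ascent on the
    Bernoulli log-likelihood."""
    theta = 0.0
    for _ in range(steps):
        grad = sum(r - rasch_probability(theta, d)
                   for r, d in zip(responses, difficulties))
        theta += lr * grad / len(responses)
    return theta

# Observe the model on a small sample of items (analogous to the 5%
# subset), then predict its accuracy on the remaining, unseen items.
sampled_difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
sampled_responses = [1, 1, 1, 0, 0]  # correct/incorrect on the sample
theta = estimate_ability(sampled_responses, sampled_difficulties)

unseen_difficulties = [-0.8, 0.2, 0.9]
predicted_accuracy = sum(rasch_probability(theta, d)
                         for d in unseen_difficulties) / len(unseen_difficulties)
```

The key idea is that once item difficulties and a model's ability live on a shared scale, accuracy on items the model never saw can be predicted rather than measured.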
<!-- GETTING STARTED -->

Installation

Install via pip

We provide a Python package promptbench for users who want to start evaluation quickly. Simply run:

pip install promptbench

Note that the pip release may lag behind the latest updates. If you want the newest features or plan to develop on top of our code, install from GitHub instead.

Install via GitHub

First, clone the repo:

git clone git@github.com:microsoft/promptbench.git

Then,

cd promptbench

To install the required packages, you can create a conda environment:

conda create --name promptbench python=3.9
conda activate promptbench

then use pip to install required packages:

pip install -r requirements.txt

Note that this installs only the basic Python packages. For prompt attacks, you will also need to install TextAttack.

Usage

promptbench is easy to use and extend. The examples below will help you get familiar with promptbench: quick use, evaluating existing datasets and LLMs, and creating your own datasets and models.

Please see Installation to install promptbench first.

If promptbench is installed via pip, you can simply do:

import promptbench as pb

If you installed promptbench from GitHub and want to use it in other projects:

import sys

# Add the directory of promptbench to the Python path
sys.path.append('/home/xxx/promptbench')

# Now you can import promptbench by name
import promptbench as pb
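A typical evaluation run builds a model, loads a dataset, formats each example into a prompt, and scores the predictions. The sketch below mirrors that flow with a stand-in model so it runs anywhere; in a real run you would obtain the model and dataset through promptbench's own APIs, and the class and template names here are illustrative, not promptbench's.

```python
# Self-contained sketch of an evaluation loop. StubModel stands in for
# a real LLM; a real pipeline would build the model and load the
# dataset via promptbench instead of hard-coding them.

class StubModel:
    """Toy sentiment 'model': answers from a trivial keyword rule."""
    def __call__(self, prompt: str) -> str:
        return "positive" if "great" in prompt else "negative"

dataset = [
    {"content": "this movie is great", "label": "positive"},
    {"content": "utterly boring plot", "label": "negative"},
]

model = StubModel()
prompt_template = "Classify the sentiment of: {content}\nAnswer:"

# Score each example: format the prompt, query the model, compare.
correct = 0
for example in dataset:
    prediction = model(prompt_template.format(content=example["content"]))
    correct += int(prediction.strip() == example["label"])

accuracy = correct / len(dataset)
print(f"accuracy = {accuracy:.2f}")  # prints "accuracy = 1.00"
```

Swapping in a different prompt template, dataset, or model changes only one line each, which is the extensibility the tutorials below walk through.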

We provide tutorials for:

  1. Evaluate models on existing benchmarks: please refer to examples/basic.ipynb for constructing your evaluation pipeline. For a multi-modal evaluation pipeline, please refer to examples/multimodal.ipynb.
  2. Test the effects of different prompting techniques.
  3. Examine robustness against prompt attacks: please refer to examples/prompt_attack.ipynb to construct the attacks.
  4. Use DyVal for evaluation: please refer to examples/dyval.ipynb to construct DyVal datasets.
  5. Efficient multi-prompt evaluation using PromptEval: please refer to examples/efficient_multi_prompt_eval.ipynb.
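The idea behind DyVal-style dynamic evaluation, generating each test sample at evaluation time with controlled complexity, can be sketched with random arithmetic expressions. This is an illustrative toy, not DyVal's actual graph-based construction; `make_arithmetic_sample` and its `depth` parameter are hypothetical names.

```python
import random

def make_arithmetic_sample(depth: int, rng: random.Random):
    """Generate a fresh arithmetic question with controlled complexity:
    depth controls how deeply the expression is nested. Returns the
    question string and its ground-truth answer."""
    if depth == 0:
        n = rng.randint(1, 9)
        return str(n), n
    left_expr, left_val = make_arithmetic_sample(depth - 1, rng)
    right_expr, right_val = make_arithmetic_sample(depth - 1, rng)
    if rng.random() < 0.5:
        return f"({left_expr} + {right_expr})", left_val + right_val
    return f"({left_expr} * {right_expr})", left_val * right_val

# Because each sample is generated at evaluation time, it cannot have
# leaked into any model's training data; raising depth raises difficulty.
rng = random.Random(0)
question, answer = make_arithmetic_sample(depth=3, rng=rng)
```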

Implemented Components

PromptBench currently supports different datasets, models, prompt engineering methods, adversarial attacks, and more. You are welcome to add more.

Datasets

  • Language datasets:
    • GLUE: SST-2, CoLA, QQP, MRPC, MNLI, QNLI, RTE, WNLI
    • MMLU
    • BIG-Bench Hard (Bool logic, valid parentheses, date...)
    • Math
    • GSM8K
    • SQuAD V2
    • IWSLT 2017
    • UN Multi
    • CSQA (CommonSense QA)
    • Numersense
    • QASC
    • Last Letter Concatenate
  • Multi-modal datasets:
    • VQAv2
    • NoCaps
    • MMMU
    • MathVista
    • AI2D
    • ChartQA
    • ScienceQA

Models

Language models:

  • Open-source models:
    • google/flan-t5-large
    • databricks/dolly-v1-6b
    • Llama2 series
    • vicuna-13b, vicuna-13b-v1.3
    • Cerebras/Cerebras-GPT-13B
    • EleutherAI/gpt-neox-20b
    • Google/flan-ul2
    • phi-1.5 and phi-2
  • Proprietary models
    • PaLM 2
    • GPT-3.5
    • GPT-4
    • Gemini Pro

Multi-modal models:

  • Open-source models:
    • BLIP2
    • LLaVA
    • Qwen-VL, Qwen-VL-Chat
    • InternLM-XComposer2-VL
  • Proprietary models
    • GPT-4v
    • Gemini Pro Vision
    • Qwen-VL-Max, Qwen-VL-Plus

Prompt Engineering

  • Chain-of-thought (COT) [1]
  • EmotionPrompt [2]
  • Expert prompting [3]
  • Zero-shot chain-of-thought
  • Generated knowledge [6]
  • Least to most [7]
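Several of these methods amount to systematic prompt transformations. Zero-shot chain-of-thought, for instance, simply appends a reasoning trigger to the question; the helper name below is hypothetical, but the trigger phrase is the standard one from the literature.

```python
def zero_shot_cot(question: str) -> str:
    """Zero-shot chain-of-thought: append the reasoning trigger so the
    model produces intermediate steps before its final answer."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = zero_shot_cot(
    "If there are 3 cars and each car has 4 wheels, how many wheels are there?"
)
```

Other methods in the list follow the same pattern with different transformations, e.g. EmotionPrompt appends an emotional stimulus and ExpertPrompting prepends an expert persona.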

Adversarial Attacks

  • Character-level attack
    • DeepWordBug
    • TextBugger
  • Word-level attack
    • TextFooler
    • BertAttack
  • Sentence-level attack
    • CheckList
    • StressTest
  • Semantic-level attack
    • Human-crafted attack
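To make the character-level category concrete, here is a DeepWordBug-flavored perturbation: swap two adjacent interior characters in randomly chosen words, so the prompt stays readable to humans but is lexically perturbed for the model. This is an illustrative sketch only; the real attacks are implemented via TextAttack, and `char_swap_attack` and its `rate` parameter are hypothetical.

```python
import random

def char_swap_attack(prompt: str, rng: random.Random, rate: float = 0.15) -> str:
    """Swap two adjacent interior characters in randomly selected words.
    Word count and the multiset of characters are preserved."""
    perturbed = []
    for word in prompt.split():
        if len(word) > 3 and rng.random() < rate:
            i = rng.randrange(1, len(word) - 2)  # interior position
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        perturbed.append(word)
    return " ".join(perturbed)

rng = random.Random(42)
clean = "Classify the sentiment of the following review as positive or negative."
adversarial = char_swap_attack(clean, rng)
```

Robustness evaluation then compares a model's accuracy on clean prompts against its accuracy on such perturbed variants.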

Protocols and Analysis

  • Standard evaluation
  • Dynamic evaluation
  • Semantic evaluation
  • Benchmark results
  • Visualization analysis
  • Transferability analysis
  • Word frequency analysis

Benchmark Results

Please refer to our benchmark website for benchmark results on prompt attacks, prompt engineering, and dynamic evaluation (DyVal).

Acknowledgements

  • TextAttack
  • README Template
  • We thank the volunteers Hanyuan Zhang, Lingrui Li, and Yating Zhou for conducting the semantic-preserving experiment in the Prompt Attack benchmark.

Reference

[1] Jason Wei, et al. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." arXiv preprint arXiv:2201.11903 (2022).

[2] Cheng Li, et al. "EmotionPrompt: Leveraging Psychology for Large Language Models Enhancement via Emotional Stimulus." arXiv preprint arXiv:2307.11760 (2023).

[3] Benfeng Xu, et al. "ExpertPrompting: Instructing Large Language Models to be Distinguished Experts." arXiv preprint arXiv:2305.14688 (2023).

[4] Kaijie Zhu, et al. "PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts." arXiv preprint arXiv:2306.04528 (2023).
