
<h1 align="center">TextAttack 🐙</h1> <p align="center">Generating adversarial examples for NLP models</p> <p align="center"> <a href="https://textattack.readthedocs.io/">[TextAttack Documentation on ReadTheDocs]</a> <br> <br> <a href="#about">About</a> • <a href="#setup">Setup</a> • <a href="#usage">Usage</a> • <a href="#design">Design</a> <br> <br> <a target="_blank"> <img src="https://github.com/QData/TextAttack/workflows/Github%20PyTest/badge.svg" alt="Github Runner Coverage Status"> </a> <a href="https://badge.fury.io/py/textattack"> <img src="https://badge.fury.io/py/textattack.svg" alt="PyPI version" height="18"> </a> </p> <img src="https://jxmo.io/files/textattack.gif" alt="TextAttack Demo GIF" style="display: block; margin: 0 auto;" />

About

TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.

If you're looking for information about TextAttack's menagerie of pre-trained models, you might want the TextAttack Model Zoo page.

Slack Channel

For help and real-time updates related to TextAttack, please join the TextAttack Slack!

Why TextAttack?

There are lots of reasons to use TextAttack:

  1. Understand NLP models better by running different adversarial attacks on them and examining the output
  2. Research and develop different NLP adversarial attacks using the TextAttack framework and library of components
  3. Augment your dataset to increase model generalization and robustness downstream
  4. Train NLP models using just a single command (all downloads included!)
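As a rough illustration of point 3, word-level augmentation can be as simple as swapping words for close neighbors. A minimal, library-free sketch (the hand-written synonym table here is a stand-in for TextAttack's embedding-based transformations, not its actual implementation):

```python
import random

# Toy synonym table standing in for a real embedding-based swap
# (TextAttack uses counter-fitted word embeddings, not a hand-written dict).
SYNONYMS = {
    "good": ["great", "fine"],
    "movie": ["film", "picture"],
}

def augment(sentence: str, n_swaps: int = 1, seed: int = 0) -> str:
    """Return a copy of `sentence` with up to `n_swaps` words replaced."""
    rng = random.Random(seed)
    words = sentence.split()
    swappable = [i for i, w in enumerate(words) if w.lower() in SYNONYMS]
    for i in rng.sample(swappable, min(n_swaps, len(swappable))):
        words[i] = rng.choice(SYNONYMS[words[i].lower()])
    return " ".join(words)

print(augment("a good movie", n_swaps=1))
```

Real augmenters layer constraints (grammaticality, semantic similarity) on top of swaps like this; `textattack augment` handles that for you.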

Setup

Installation

You should be running Python 3.6+ to use this package. A CUDA-compatible GPU is optional but will greatly improve code speed. TextAttack is available through pip:

pip install textattack

Once TextAttack is installed, you can run it via command-line (textattack ...) or via python module (python -m textattack ...).

Tip: TextAttack downloads files to ~/.cache/textattack/ by default. This includes pretrained models, dataset samples, and the configuration file config.yaml. To change the cache path, set the environment variable TA_CACHE_DIR. (for example: TA_CACHE_DIR=/tmp/ textattack attack ...).
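The resolution order described in the tip — environment variable first, then the default under your home directory — can be sketched as follows (this mirrors the documented behavior, not TextAttack's actual source):

```python
import os

def cache_dir() -> str:
    """Resolve the cache directory as the tip above describes:
    TA_CACHE_DIR wins if set, otherwise ~/.cache/textattack/."""
    return os.environ.get("TA_CACHE_DIR",
                          os.path.expanduser("~/.cache/textattack/"))

os.environ["TA_CACHE_DIR"] = "/tmp/ta-cache"
print(cache_dir())  # /tmp/ta-cache
```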

Usage

Help: textattack --help

TextAttack's main features can all be accessed via the textattack command. Two very common commands are textattack attack <args> and textattack augment <args>. You can see more information about all commands using

textattack --help

or a specific command using, for example,

textattack attack --help

The examples/ folder includes scripts showing common TextAttack usage for training models, running attacks, and augmenting a CSV file.

The documentation website contains walkthroughs explaining basic usage of TextAttack, including building a custom transformation and a custom constraint.

Running Attacks: textattack attack --help

The easiest way to try out an attack is via the command-line interface, textattack attack.

Tip: If your machine has multiple GPUs, you can distribute the attack across them using the --parallel option. For some attacks, this can really help performance. (If you want to attack Keras models in parallel, please check out examples/attack/attack_keras_parallel.py instead)

Here are some concrete examples:

TextFooler on BERT trained on the MR sentiment classification dataset:

textattack attack --recipe textfooler --model bert-base-uncased-mr --num-examples 100

DeepWordBug on DistilBERT trained on the CoLA linguistic acceptability dataset:

textattack attack --model distilbert-base-uncased-cola --recipe deepwordbug --num-examples 100
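DeepWordBug limits its character-level perturbations with a Levenshtein edit-distance constraint. A self-contained sketch of that distance (the standard dynamic-programming algorithm, not TextAttack's implementation) and how such a constraint would gate edits:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic DP edit distance: minimum number of character insertions,
    deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# A constraint in DeepWordBug's spirit rejects perturbations past a budget
# (the threshold value 2 here is illustrative, not the recipe's default):
def within_edit_budget(original: str, perturbed: str, max_edits: int = 2) -> bool:
    return levenshtein(original, perturbed) <= max_edits

print(levenshtein("kitten", "sitting"))  # 3
```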

Beam search with beam width 4 and word embedding transformation and untargeted goal function on an LSTM:

textattack attack --model lstm-mr --num-examples 20 \
 --search-method beam-search^beam_width=4 --transformation word-swap-embedding \
 --constraints repeat stopword max-words-perturbed^max_num_words=2 embedding^min_cos_sim=0.8 part-of-speech \
 --goal-function untargeted-classification
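The search method in the command above can be sketched generically: beam search keeps the best `beam_width` partial perturbations as it sweeps through word positions. Below, `score` is a toy stand-in for a goal function and `candidates` for a word-swap transformation; neither is TextAttack's API:

```python
import heapq

def beam_search(words, candidates, score, beam_width=4):
    """Generic beam search over word swaps: at each position, expand every
    beam entry with every candidate swap and keep the `beam_width` best.
    `candidates(word)` yields replacements; `score(words)` is higher-is-better."""
    beam = [(score(words), words)]
    for i in range(len(words)):
        expanded = list(beam)
        for _, ws in beam:
            for cand in candidates(ws[i]):
                new = ws[:i] + [cand] + ws[i + 1:]
                expanded.append((score(new), new))
        beam = heapq.nlargest(beam_width, expanded, key=lambda t: t[0])
    return beam[0]

# Toy demo: a fixed swap table, and a "goal" that counts the letter "i".
swaps = {"cat": ["cot", "bat"], "sat": ["sit", "set"]}
best_score, best_words = beam_search(
    ["cat", "sat"],
    candidates=lambda w: swaps.get(w, []),
    score=lambda ws: sum(w.count("i") for w in ws),
)
print(best_score, best_words)
```

The real recipe additionally filters every candidate through the listed constraints (stopwords, max words perturbed, embedding similarity, part-of-speech) before scoring.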

Tip: Instead of specifying a dataset and number of examples, you can pass --interactive to attack samples inputted by the user.

Attacks and Papers Implemented ("Attack Recipes"): textattack attack --recipe [recipe_name]

We include attack recipes which implement attacks from the literature. You can list attack recipes using textattack list attack-recipes.

To run an attack recipe: textattack attack --recipe [recipe_name]

<img src="docs/_static/imgs/overview.png" alt="TextAttack Overview" style="display: block; margin: 0 auto;" /> <table style="width:100%" border="1"> <thead> <tr class="header"> <th><strong>Attack Recipe Name</strong></th> <th><strong>Goal Function</strong></th> <th><strong>Constraints Enforced</strong></th> <th><strong>Transformation</strong></th> <th><strong>Search Method</strong></th> <th><strong>Main Idea</strong></th> </tr> </thead> <tbody> <tr><td style="text-align: center;" colspan="6"><strong><br>Attacks on classification tasks, like sentiment classification and entailment:<br></strong></td></tr> <tr> <td><code>a2t</code> <span class="citation" data-cites="yoo2021a2t"></span></td> <td><sub>Untargeted {Classification, Entailment}</sub></td> <td><sub>Percentage of words perturbed, Word embedding distance, DistilBERT sentence encoding cosine similarity, part-of-speech consistency</sub></td> <td><sub>Counter-fitted word embedding swap (or) BERT Masked Token Prediction</sub></td> <td><sub>Greedy-WIR (gradient)</sub></td> <td ><sub>from (["Towards Improving Adversarial Training of NLP Models" (Yoo et al., 2021)](https://arxiv.org/abs/2109.00544))</sub></td> </tr> <tr> <td><code>alzantot</code> <span class="citation" data-cites="Alzantot2018GeneratingNL Jia2019CertifiedRT"></span></td> <td><sub>Untargeted {Classification, Entailment}</sub></td> <td><sub>Percentage of words perturbed, Language Model perplexity, Word embedding distance</sub></td> <td><sub>Counter-fitted word embedding swap</sub></td> <td><sub>Genetic Algorithm</sub></td> <td ><sub>from (["Generating Natural Language Adversarial Examples" (Alzantot et al., 2018)](https://arxiv.org/abs/1804.07998))</sub></td> </tr> <tr> <td><code>bae</code> <span class="citation" data-cites="garg2020bae"></span></td> <td><sub>Untargeted Classification</sub></td> <td><sub>USE sentence encoding cosine similarity</sub></td> <td><sub>BERT Masked Token Prediction</sub></td> <td><sub>Greedy-WIR</sub></td> <td ><sub>BERT 
masked language model transformation attack from (["BAE: BERT-based Adversarial Examples for Text Classification" (Garg & Ramakrishnan, 2019)](https://arxiv.org/abs/2004.01970)).</sub></td> </tr> <tr> <td><code>bert-attack</code> <span class="citation" data-cites="li2020bertattack"></span></td> <td><sub>Untargeted Classification</sub></td> <td><sub>USE sentence encoding cosine similarity, Maximum number of words perturbed</sub></td> <td><sub>BERT Masked Token Prediction (with subword expansion)</sub></td> <td><sub>Greedy-WIR</sub></td> <td ><sub> (["BERT-ATTACK: Adversarial Attack Against BERT Using BERT" (Li et al., 2020)](https://arxiv.org/abs/2004.09984))</sub></td> </tr> <tr> <td><code>checklist</code> <span class="citation" data-cites="Gao2018BlackBoxGO"></span></td> <td><sub>{Untargeted, Targeted} Classification</sub></td> <td><sub>checklist distance</sub></td> <td><sub>contract, extend, and substitute named entities</sub></td> <td><sub>Greedy-WIR</sub></td> <td ><sub>Invariance testing implemented in CheckList 
(["Beyond Accuracy: Behavioral Testing of NLP models with CheckList" (Ribeiro et al., 2020)](https://arxiv.org/abs/2005.04118))</sub></td> </tr> <tr> <td> <code>clare</code> <span class="citation" data-cites="Alzantot2018GeneratingNL Jia2019CertifiedRT"></span></td> <td><sub>Untargeted {Classification, Entailment}</sub></td> <td><sub>USE sentence encoding cosine similarity</sub></td> <td><sub>RoBERTa Masked Prediction for token swap, insert and merge</sub></td> <td><sub>Greedy</sub></td> <td ><sub>from (["Contextualized Perturbation for Textual Adversarial Attack" (Li et al., 2020)](https://arxiv.org/abs/2009.07502))</sub></td> </tr> <tr> <td><code>deepwordbug</code> <span class="citation" data-cites="Gao2018BlackBoxGO"></span></td> <td><sub>{Untargeted, Targeted} Classification</sub></td> <td><sub>Levenshtein edit distance</sub></td> <td><sub>{Character Insertion, Character Deletion, Neighboring Character Swap, Character Substitution}</sub></td> <td><sub>Greedy-WIR</sub></td> <td ><sub>Greedy replace-1 scoring and multi-transformation character-swap attack (["Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers" (Gao et al., 2018)](https://arxiv.org/abs/1801.04354))</sub></td> </tr> <tr> <td> <code>faster-alzantot</code> <span class="citation" data-cites="Alzantot2018GeneratingNL Jia2019CertifiedRT"></span></td> <td><sub>Untargeted {Classification, Entailment}</sub></td> <td><sub>Percentage of words perturbed, Language Model perplexity, Word embedding distance</sub></td> <td><sub>Counter-fitted word embedding swap</sub></td> <td><sub>Genetic Algorithm</sub></td> <td ><sub>Modified, faster version of the Alzantot et al. 
genetic algorithm, from (["Certified Robustness to Adversarial Word Substitutions" (Jia et al., 2019)](https://arxiv.org/abs/1909.00986))</sub></td> </tr> <tr> <td><code>hotflip</code> (word swap) <span class="citation" data-cites="Ebrahimi2017HotFlipWA"></span></td> <td><sub>Untargeted Classification</sub></td> <td><sub>Word Embedding Cosine Similarity, Part-of-speech match, Number of words perturbed</sub></td> <td><sub>Gradient-Based Word Swap</sub></td> <td><sub>Beam search</sub></td> <td ><sub>from (["HotFlip: White-Box Adversarial Examples for Text Classification" (Ebrahimi et al., 2017)](https://arxiv.org/abs/1712.06751))</sub></td> </tr> </tbody> </table>
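Every row of the table is the same four-part recipe: a goal function, a set of constraints, a transformation, and a search method. A simplified sketch of that decomposition (the class and signatures below are illustrative stand-ins, not TextAttack's real API):

```python
from dataclasses import dataclass
from typing import Callable, List

Text = List[str]  # a text as a list of words

@dataclass
class Attack:
    """Illustrative four-part attack recipe, mirroring the table's columns."""
    goal_function: Callable[[Text], float]             # higher = closer to success
    constraints: List[Callable[[Text, Text], bool]]    # (original, candidate) -> ok?
    transformation: Callable[[Text, int], List[Text]]  # candidate swaps at position i

    def perturbations(self, original: Text, current: Text, i: int) -> List[Text]:
        """Transform position i, keeping only candidates every constraint accepts."""
        return [cand for cand in self.transformation(current, i)
                if all(ok(original, cand) for ok in self.constraints)]

def greedy_search(attack: Attack, words: Text) -> Text:
    """A minimal greedy search method: take the best allowed swap at each position."""
    current = list(words)
    for i in range(len(words)):
        options = attack.perturbations(words, current, i) + [current]
        current = max(options, key=attack.goal_function)
    return current
```

Swapping any one component (say, the search method for a genetic algorithm) yields a different recipe, which is exactly how the variants in the table relate to one another.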
