<h1 align="center">TransferAttack</h1>
<p align="center">
<a href="https://github.com/Trustworthy-AI-Group/TransferAttack/stargazers"> <img src="https://img.shields.io/github/stars/Trustworthy-AI-Group/TransferAttack.svg?style=popout-square" alt="GitHub stars"></a>
<a href="https://github.com/Trustworthy-AI-Group/TransferAttack/issues"> <img src="https://img.shields.io/github/issues/Trustworthy-AI-Group/TransferAttack.svg?style=popout-square" alt="GitHub issues"></a>
<a href="https://github.com/Trustworthy-AI-Group/TransferAttack/forks"> <img src="https://img.shields.io/github/forks/Trustworthy-AI-Group/TransferAttack.svg?style=popout-square" alt="GitHub forks"></a>
</p>
TransferAttack is a PyTorch framework for boosting the adversarial transferability of attacks on image classification, accompanying the paper
Delving into Adversarial Transferability on Image Classification: Review, Benchmark, and Evaluation.

We also release a list of papers about transfer-based attacks here.
Why TransferAttack
There are several reasons to use TransferAttack:
- A benchmark for evaluating new transfer-based attacks: TransferAttack categorizes existing transfer-based attacks into several types and fairly evaluates various transfer-based attacks under the same setting.
- Evaluate the robustness of deep models: TransferAttack provides a plug-and-play interface to verify the robustness of models, such as CNNs and ViTs.
- A summary of transfer-based attacks: TransferAttack reviews numerous transfer-based attacks, making it easy to get the whole picture of transfer-based attacks for practitioners.
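The core idea behind every attack in this benchmark is transferability: adversarial examples crafted on an accessible surrogate model often fool a different, unseen target model. As a minimal, framework-free illustration of that setup (a toy sketch, not the TransferAttack API), the following crafts FGSM perturbations on a surrogate linear classifier and measures the accuracy drop on a separate target classifier:

```python
# Illustrative sketch of transfer-based attack evaluation (not the
# TransferAttack API): craft adversarial examples on a surrogate model,
# then evaluate them on a different target model.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset: points around +mu / -mu in 8 dimensions.
mu = np.ones(8)
X = np.vstack([rng.normal(mu, 1.0, (100, 8)), rng.normal(-mu, 1.0, (100, 8))])
y = np.array([1] * 100 + [-1] * 100)

# Two different linear classifiers f(x) = sign(w @ x):
w_surrogate = mu + rng.normal(0, 0.3, 8)  # model the attacker can access
w_target = mu + rng.normal(0, 0.3, 8)     # unseen deployed model

def accuracy(w, X, y):
    return np.mean(np.sign(X @ w) == y)

# FGSM on the surrogate: the margin y * (w @ x) decreases fastest along
# -y * sign(w), so perturb each input by epsilon in that direction.
epsilon = 1.0
X_adv = X - epsilon * y[:, None] * np.sign(w_surrogate)[None, :]

clean_acc = accuracy(w_target, X, y)
adv_acc = accuracy(w_target, X_adv, y)
print(f"target accuracy: clean={clean_acc:.2f}, adversarial={adv_acc:.2f}")
```

Because both classifiers approximate the same decision boundary, perturbations computed on the surrogate degrade the target's accuracy as well; the framework's evaluation pipeline measures exactly this kind of transfer across real CNNs and ViTs.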
Citation
If our paper or this code is useful for your research, please cite our paper:
@article{wang2026delving,
title={{Delving into Adversarial Transferability on Image Classification: Review, Benchmark, and Evaluation}},
author={Xiaosen Wang and Zhijin Ge and Bohan Liu and Zheng Fang and Fengfan Zhou and Ruixuan Zhang and Shaokang Wang and Yuyang Luo},
journal={arXiv preprint arXiv:2602.23117},
year={2026}
}
Requirements
- Python >= 3.6
- PyTorch >= 1.12.1
- Torchvision >= 0.13.1
- timm >= 0.6.12
pip install -r requirements.txt
Usage
We adopt the ImageNet-compatible dataset widely used in the literature, comprising 1,000 PNG images, for our experiments. Download the data into /path/to/data. Then you can execute the attack as follows:
python main.py --input_dir ./path/to/data --output_dir adv_data/mifgsm/resnet50 --attack mifgsm --model=resnet50
python main.py --input_dir ./path/to/data --output_dir adv_data/mifgsm/resnet50 --eval
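Most gradient-based attacks in the table below refine the same iterative loop. As a minimal, framework-free sketch (an illustration on a toy analytic loss, not the repository's implementation), here is the core MI-FGSM update: accumulate a momentum of normalized gradients and step by its sign inside an L-infinity ball:

```python
# Minimal MI-FGSM sketch (illustration only, not the TransferAttack code):
# accumulate a momentum of L1-normalized gradients and step by its sign,
# keeping the perturbation inside an L_inf ball of radius epsilon.
import numpy as np

def mifgsm(x, grad_fn, epsilon=0.3, num_iter=10, mu=1.0):
    """Maximize a loss via MI-FGSM; grad_fn returns the loss gradient at x."""
    alpha = epsilon / num_iter      # per-step size
    g = np.zeros_like(x)            # momentum accumulator
    x_adv = x.copy()
    for _ in range(num_iter):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x_adv = x_adv + alpha * np.sign(g)                # sign step (ascent)
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to ball
    return x_adv

# Toy loss L(x) = w @ x, whose gradient is the constant vector w.
w = np.array([1.0, -2.0, 0.5])
x0 = np.zeros(3)
x_adv = mifgsm(x0, lambda x: w)
print(x_adv)  # each coordinate pushed to +/- epsilon along sign(w)
```

Setting mu to 0 recovers I-FGSM, and taking a single step of size epsilon recovers FGSM; the later attacks in the table mainly change how the gradient in this loop is estimated or stabilized.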
Attacks and Models
Untargeted Attacks
<table style="width:100%" border="1">
<thead>
<tr class="header">
<th><strong>Category</strong></th>
<th><strong>Attack </strong></th>
<th><strong>Main Idea</strong></th>
</tr>
</thead>
<tr>
<th rowspan="28"><sub><strong>Gradient-based</strong></sub></th>
<td><a href="https://arxiv.org/abs/1412.6572" target="_blank" rel="noopener noreferrer">FGSM (Goodfellow et al., 2015)</a></td>
<td ><sub>Add a small perturbation in the direction of the gradient</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/1607.02533" target="_blank" rel="noopener noreferrer">I-FGSM (Kurakin et al., 2016)</a></td>
<td ><sub>Iterative version of FGSM</sub></td>
</tr>
<tr>
<td><a href="https://openaccess.thecvf.com/content_cvpr_2018/papers/Dong_Boosting_Adversarial_Attacks_CVPR_2018_paper.pdf" target="_blank" rel="noopener noreferrer">MI-FGSM (Dong et al., 2018)</a></td>
<td ><sub>Integrate the momentum term into the I-FGSM</sub></td>
</tr>
<tr>
<td><a href="https://openreview.net/pdf?id=SJlHwkBYDH" target="_blank" rel="noopener noreferrer">NI-FGSM (Lin et al., 2020)</a></td>
<td ><sub>Integrate Nesterov's accelerated gradient into I-FGSM</sub></td>
</tr>
<tr>
<td><a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123730307.pdf" target="_blank" rel="noopener noreferrer">PI-FGSM (Gao et al., 2020)</a></td>
<td ><sub>Reuse the cut noise and apply a heuristic projection strategy to generate patch-wise noise</sub></td>
</tr>
<tr>
<td><a href="https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Enhancing_the_Transferability_of_Adversarial_Attacks_Through_Variance_Tuning_CVPR_2021_paper.pdf" target="_blank" rel="noopener noreferrer">VMI-FGSM (Wang et al., 2021)</a></td>
<td ><sub>Variance tuning MI-FGSM</sub></td>
</tr>
<tr>
<td><a href="https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Enhancing_the_Transferability_of_Adversarial_Attacks_Through_Variance_Tuning_CVPR_2021_paper.pdf" target="_blank" rel="noopener noreferrer">VNI-FGSM (Wang et al., 2021)</a></td>
<td ><sub>Variance tuning NI-FGSM</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2103.10609" target="_blank" rel="noopener noreferrer">EMI-FGSM (Wang et al., 2021)</a></td>
<td ><sub>Accumulate the gradients of several data points linearly sampled in the direction of previous gradient</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2007.03838" target="_blank" rel="noopener noreferrer">AI-FGTM (Zou et al., 2022)</a></td>
<td ><sub>Adopt Adam to adjust the step size and momentum using the tanh function</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2104.09722" target="_blank" rel="noopener noreferrer">I-FGS²M (Zhang et al., 2021)</a></td>
<td ><sub>Assign staircase weights to each interval of the gradient</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2307.02828" target="_blank" rel="noopener noreferrer">SMI-FGRM (Han et al., 2023)</a></td>
<td ><sub>Substitute the sign function with data rescaling and use a depth-first sampling technique to stabilize the update direction</sub></td>
</tr>
<tr>
<td><a href="https://www.ijcai.org/proceedings/2022/0227.pdf" target="_blank" rel="noopener noreferrer">VA-I-FGSM (Zhang et al., 2022)</a></td>
<td ><sub>Adopt a larger step size and auxiliary gradients from other categories</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2210.05968" target="_blank" rel="noopener noreferrer">RAP (Qin et al., 2022)</a></td>
<td ><sub>Inject the worst-case perturbation when calculating the gradient</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2306.01809" target="_blank" rel="noopener noreferrer">PC-I-FGSM (Wan et al., 2023)</a></td>
<td ><sub>Gradient Prediction-Correction on MI-FGSM</sub></td>
</tr>
<tr>
<td><a href="https://ieeexplore.ieee.org/document/10096558" target="_blank" rel="noopener noreferrer">IE-FGSM (Peng et al., 2023)</a></td>
<td ><sub>Integrate an anticipatory data point to stabilize the update direction</sub></td>
</tr>
<tr>
<td><a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Zhu_Boosting_Adversarial_Transferability_via_Gradient_Relevance_Attack_ICCV_2023_paper.pdf" target="_blank" rel="noopener noreferrer">GRA (Zhu et al., 2023)</a></td>
<td ><sub>Correct the gradient using the average gradient of several data points sampled in the neighborhood and adjust the update gradient with a decay indicator</sub></td>
</tr>
<tr>
<td><a href="https://ieeexplore.ieee.org/abstract/document/10223158" target="_blank" rel="noopener noreferrer">GNP (Wu et al., 2023)</a></td>
<td ><sub>Introduce a gradient norm penalty (GNP) term into the loss function</sub></td>
</tr>
<tr>
<td><a href="https://openaccess.thecvf.com/content/ICCV2023/papers/Ma_Transferable_Adversarial_Attack_for_Both_Vision_Transformers_and_Convolutional_Networks_ICCV_2023_paper.pdf" target="_blank" rel="noopener noreferrer">MIG (Ma et al., 2023)</a></td>
<td ><sub>Utilize integrated gradient to steer the generation of adversarial perturbations</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2303.15109" target="_blank" rel="noopener noreferrer">DTA (Yang et al., 2023)</a></td>
<td ><sub>Calculate the gradient on several examples using a small step size</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2306.05225" target="_blank" rel="noopener noreferrer">PGN (Ge et al., 2023)</a></td>
<td ><sub>Penalize the gradient norm of the original loss function</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2405.16181" target="_blank" rel="noopener noreferrer">MEF (Qiu et al., 2024)</a></td>
<td ><sub>Construct a max-min bi-level optimization problem aimed at finding flat adversarial regions</sub></td>
</tr>
<tr>
<td><a href="https://openaccess.thecvf.com/content/CVPR2024/papers/Fang_Strong_Transferable_Adversarial_Attacks_via_Ensembled_Asymptotically_Normal_Distribution_Learning_CVPR_2024_paper.pdf" target="_blank" rel="noopener noreferrer">ANDA (Fang et al., 2024)</a></td>
<td ><sub>Explicitly characterize adversarial perturbations from a learned distribution by taking advantage of the asymptotic normality property of stochastic gradient ascent</sub></td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2211.11236" target="_blank" rel="noopener noreferrer">GI-FGSM (Wang et al., 2024)</a></td>
<td ><sub>Use global momentum initialization to better stabilize the update direction</sub></td>
</tr>
<tr>
<td><a href="https://dl.acm.org/doi/10.1145/3627673.3679858" target="_blank" rel="noopener noreferrer">FGSRA (Wang et al., 2024)</a></td>
<td ><sub>Leverage frequency information and introduce similarity weights to assess neighborhood contribution.</sub></td>
</tr>
<tr>
<td><a href="https://ojs.aaai.org/index.php/AAAI/article/view/29323" target="_blank" rel="noopener noreferrer">AdaMSI-FGM (Long et al., 2024)</a></td>
<td ><sub>Guarantee convergence by incorporating an innovative, non-monotonic adaptive momentum parameter and replacing the problematic