# TransferAttackEval

Revisiting Transferable Adversarial Images (TPAMI 2025)
Revisiting Transferable Adversarial Images: Systemization, Evaluation, and New Insights. Zhengyu Zhao*, Hanwei Zhang*, Renjue Li*, Ronan Sicre, Laurent Amsaleg, Michael Backes, Qi Li, Qian Wang, Chao Shen. TPAMI 2025.
We identify two main problems in common evaluation practices: <br>(1) for attack transferability, lack of systematic, one-to-one attack comparisons and fair hyperparameter settings; <br>(2) for attack stealthiness, simply no evaluations.
We address these problems by <br>(1) introducing a complete attack categorization and conducting systematic and fair intra-category analyses on transferability; <br>(2) considering diverse imperceptibility metrics and finer-grained stealthiness characteristics from the perspective of attack traceback.
We draw new insights, e.g., <br>(1) under a fair attack hyperparameter setting, one early attack method, DI, actually outperforms all the follow-up methods; <br>(2) popular diffusion-based defenses give a false sense of security, since they are in fact largely bypassed by (black-box) transferable attacks; <br>(3) even when all attacks are bounded by the same Lp norm, they lead to dramatically different stealthiness, which negatively correlates with their transferability.
We provide the first large-scale evaluation of transferable adversarial examples on ImageNet, involving 23 representative attacks against 9 representative defenses.
We reveal that existing problematic evaluations have indeed caused misleading conclusions and missing points, and as a result, hindered the assessment of the actual progress in this field.
## Evaluated Attacks and Defenses
<p align="center"> <img src="./attacks.png" width='550'> <img src="./defenses.png" width='450'> </p>

## Attack Categorization (Welcome more papers!)
<p align="center"> <img src="./transfer_pipeline.png" width='500'> </p>

### Gradient Stabilization Attacks [Code for 3 representative attacks]
- Boosting Adversarial Attacks with Momentum (CVPR 2018)
- Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks (ICLR 2020)
- Boosting Adversarial Transferability through Enhanced Momentum (BMVC 2021)
- Improving Adversarial Transferability with Spatial Momentum (arXiv 2022)
- Making Adversarial Examples More Transferable and Indistinguishable (AAAI 2022)
- Boosting Adversarial Transferability by Achieving Flat Local Maxima (NeurIPS 2023)
- Transferable Adversarial Attack for Both Vision Transformers and Convolutional Networks via Momentum Integrated Gradients (ICCV 2023)
- Boosting Adversarial Transferability via Gradient Relevance Attack (ICCV 2023)
- Enhancing Transferable Adversarial Attacks on Vision Transformers through Gradient Normalization Scaling and High-Frequency Adaptation (ICLR 2024)
- Enhancing Adversarial Transferability Through Neighborhood Conditional Sampling (arXiv 2024)
- Improving Integrated Gradient-based Transferable Adversarial Examples by Refining the Integration Path (AAAI 2025)
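The unifying idea of this category, introduced by the momentum attack MI (CVPR 2018), is to stabilize the update direction by accumulating normalized gradients across iterations. Below is a minimal NumPy sketch of the MI-FGSM update rule; `grad_fn` is a hypothetical stand-in for backpropagation through a real surrogate model, not part of this repo's code.

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.03, steps=10, mu=1.0):
    """Sketch of MI-FGSM (momentum iterative FGSM).

    grad_fn(x) returns the loss gradient w.r.t. the input; here it is
    a user-supplied stand-in for a surrogate model's backward pass.
    """
    alpha = eps / steps           # per-step size, so the L_inf budget is used up
    x_adv = x.copy()
    g = np.zeros_like(x)          # accumulated (momentum) gradient
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized update
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image range
    return x_adv

# Toy usage: a "model" with loss L(x) = w . x, whose gradient is the constant w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = mi_fgsm(x, lambda z: w, eps=0.1)
```

The L1 normalization decouples the momentum accumulation from the gradient's raw scale, which is what keeps the update direction stable across iterations.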
### Input Augmentation Attacks [Code for 5 representative attacks]
- Improving Transferability of Adversarial Examples with Input Diversity (CVPR 2019)
- Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks (CVPR 2019)
- Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks (ICLR 2020)
- Patch-wise Attack for Fooling Deep Neural Network (ECCV 2020)
- Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting (ECCV 2020)
- Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses (ECCV 2020)
- Enhancing the Transferability of Adversarial Attacks through Variance Tuning (CVPR 2021)
- Admix: Enhancing the Transferability of Adversarial Attacks (ICCV 2021)
- Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input (CVPR 2022)
- Frequency Domain Model Augmentation for Adversarial Attack (ECCV 2022)
- Adaptive Image Transformations for Transfer-based Adversarial Attack (ECCV 2022)
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022)
- Enhancing the Self-Universality for Transferable Targeted Attacks (CVPR 2023)
- Improving the Transferability of Adversarial Samples by Path-Augmented Method (CVPR 2023)
- The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations (arXiv 2023)
- Structure Invariant Transformation for better Adversarial Transferability (ICCV 2023)
- Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping (AAAI 2024)
- Boosting Adversarial Transferability by Block Shuffle and Rotation (CVPR 2024)
- Learning to Transform Dynamically for Better Adversarial Transferability (CVPR 2024)
- Boosting the Transferability of Adversarial Examples via Local Mixup and Adaptive Step Size (arXiv 2024)
- Typography Leads Semantic Diversifying: Amplifying Adversarial Transferability across Multimodal Large Language Models (arXiv 2024)
- Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning (CVPR 2024)
- Everywhere Attack: Attacking Locally and Globally to Boost Targeted Transferability (AAAI 2025)
- Boosting Adversarial Transferability with Spatial Adversarial Alignment (arXiv 2025)
- S4ST: A Strong, Self-transferable, faSt, and Simple Scale Transformation for Transferable Targeted Attack (arXiv 2025)
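The common thread in this category is transforming the input before each gradient step so the attack does not overfit the surrogate model. As an illustration (not this repo's implementation), here is a pure-NumPy sketch of the DI-style random resize-and-pad transform; real attacks apply a differentiable version of it inside the optimization loop.

```python
import numpy as np

def diverse_input(x, p=0.5, rng=None):
    """Sketch of the input-diversity transform from DI (CVPR 2019):
    with probability p, randomly shrink the image and zero-pad it back
    to its original size. Uses nearest-neighbor resizing for simplicity."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = x.shape[:2]
    if rng.random() >= p:
        return x                                  # keep the clean input
    new = int(rng.integers(int(0.9 * h), h + 1))  # random smaller side length
    rows = np.arange(new) * h // new              # nearest-neighbor indices
    cols = np.arange(new) * w // new
    resized = x[rows][:, cols]
    top = int(rng.integers(0, h - new + 1))       # random padding offsets
    left = int(rng.integers(0, w - new + 1))
    out = np.zeros_like(x)
    out[top:top + new, left:left + new] = resized
    return out

# Toy usage on an 8x8 all-ones "image".
x = np.ones((8, 8))
y = diverse_input(x, p=1.0, rng=np.random.default_rng(0))
```

Because the resize scale and padding offsets are resampled at every iteration, the gradients are averaged over many input variants, which is the source of the transferability gain.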
### Feature Disruption Attacks [Code for 5 representative attacks]
- Transferable Adversarial Perturbations (ECCV 2018)
- Task-generalizable Adversarial Attack based on Perceptual Metric (arXiv 2018)
- Feature Space Perturbations Yield More Transferable Adversarial Examples (CVPR 2019)
- FDA: Feature Disruptive Attack (ICCV 2019)
- Enhancing Adversarial Example Transferability with an Intermediate Level Attack (ICCV 2019)
- Transferable Perturbations of Deep Feature Distributions (ICLR 2020)
- Boosting the Transferability of Adversarial Samples via Attention (CVPR 2020)
- Towards Transferable Targeted Attack (CVPR 2020)
- Yet Another Intermediate-Level Attack (ECCV 2020)
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability (NeurIPS 2020)
- Feature Importance-aware Transferable Adversarial Attacks (ICCV 2021)
- Improving Adversarial Transferability via Neuron Attribution-Based Attacks (CVPR 2022)
- An Intermediate-level Attack Framework on The Basis of Linear Regression (TPAMI 2022)
- Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup (CVPR 2023)
- Diversifying the High-level Features for better Adversarial Transferability (BMVC 2023)
- Improving Adversarial Transferability via Intermediate-level Perturbation Decay (NeurIPS 2023)
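Attacks in this category perturb intermediate-layer features rather than only the final logits. The toy sketch below shows the shared objective, maximizing the feature distance between adversarial and clean inputs, with a linear map `W` as a hypothetical stand-in for a network layer (real attacks backpropagate through the surrogate model instead).

```python
import numpy as np

def feature_disruption_step(x_adv, x, W, alpha=0.01, eps=0.1):
    """One ascent step on the feature distance ||W x_adv - W x||^2,
    the shared objective behind TAP/FDA-style attacks (toy version)."""
    grad = 2 * W.T @ (W @ (x_adv - x))         # analytic gradient of the objective
    x_adv = x_adv + alpha * np.sign(grad)      # I-FGSM-style sign update
    return np.clip(x_adv, x - eps, x + eps)    # stay in the L_inf ball

# Toy run: the feature distance grows until the eps bound is reached.
W = np.array([[1.0, 0.0], [0.0, 2.0]])
x = np.zeros(2)
x_adv = np.array([0.01, -0.01])
d0 = np.linalg.norm(W @ (x_adv - x))
for _ in range(5):
    x_adv = feature_disruption_step(x_adv, x, W)
d1 = np.linalg.norm(W @ (x_adv - x))
```

The intuition for transferability is that intermediate features are more similar across architectures than decision boundaries, so disrupting them hurts many target models at once.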
