
TransferAttackEval

Revisiting Transferable Adversarial Images (TPAMI 2025)

README

Revisiting Transferable Adversarial Images

Revisiting Transferable Adversarial Images: Systemization, Evaluation, and New Insights. Zhengyu Zhao*, Hanwei Zhang*, Renjue Li*, Ronan Sicre, Laurent Amsaleg, Michael Backes, Qi Li, Qian Wang, Chao Shen. TPAMI 2025.

We identify two main problems in common evaluation practices: <br>(1) for attack transferability, a lack of systematic, one-to-one attack comparisons and fair hyperparameter settings; <br>(2) for attack stealthiness, simply no evaluation at all.

We address these problems by <br>(1) introducing a complete attack categorization and conducting systematic and fair intra-category analyses on transferability; <br>(2) considering diverse imperceptibility metrics and finer-grained stealthiness characteristics from the perspective of attack traceback.

We draw new insights, e.g.: <br>(1) under a fair attack hyperparameter setting, one early attack method, DI, actually outperforms all follow-up methods; <br>(2) popular diffusion-based defenses give a false sense of security, since they are in fact largely bypassed by (black-box) transferable attacks; <br>(3) even when all attacks are bounded by the same Lp norm, they differ dramatically in stealthiness, which negatively correlates with their transferability.

We provide the first large-scale evaluation of transferable adversarial examples on ImageNet, involving 23 representative attacks against 9 representative defenses.

We reveal that existing problematic evaluations have indeed led to misleading conclusions and overlooked findings, and have thereby hindered the assessment of actual progress in this field.

Evaluated Attacks and Defenses

<p align="center"> <img src="./attacks.png" width='550'> <img src="./defenses.png" width='450'> </p>

Attack Categorization (more papers welcome!)

<p align="center"> <img src="./transfer_pipeline.png" width='500'> </p>

Gradient Stabilization Attacks [Code for 3 representative attacks]
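Gradient-stabilization attacks such as MI-FGSM accumulate a momentum of L1-normalized gradients across iterations so the update direction does not oscillate between steps. A minimal NumPy sketch of one such update step follows; the function name, step size, and decay default are illustrative, not this repository's API:

```python
import numpy as np

def mi_fgsm_step(x_adv, grad, momentum, eps_step=2 / 255, mu=1.0):
    """One MI-FGSM-style update (illustrative helper, not the repo's API).

    Accumulates a decayed momentum of L1-normalized loss gradients,
    then takes a signed step and clips the image back to [0, 1].
    """
    # Normalize the current gradient by its L1 norm to stabilize magnitudes
    # across iterations, then add it to the decayed momentum buffer.
    momentum = mu * momentum + grad / (np.abs(grad).sum() + 1e-12)
    # Take a fixed-size step in the sign direction of the momentum.
    x_adv = x_adv + eps_step * np.sign(momentum)
    return np.clip(x_adv, 0.0, 1.0), momentum
```

In a full attack loop, `grad` would come from backpropagating the surrogate model's classification loss at `x_adv`; here it is left abstract so the update rule itself stays self-contained.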

Input Augmentation Attacks [Code for 5 representative attacks]
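Input-augmentation attacks such as DI compute gradients on randomly transformed copies of the input. DI's transform randomly resizes the image and then zero-pads it to a fixed larger size, applied only with some probability. A hedged NumPy sketch, assuming an HxWxC float image; nearest-neighbour indexing stands in for proper interpolation, and all names and defaults are illustrative:

```python
import numpy as np

def diverse_input(x, out_size=331, prob=0.5, rng=None):
    """DI-style random resize-and-pad (illustrative sketch).

    With probability `prob`, resize x (HxWxC) to a random intermediate
    size and place it at a random offset inside a zero canvas of
    out_size x out_size; otherwise return the input unchanged.
    """
    rng = rng or np.random.default_rng()
    if rng.random() > prob:
        return x  # identity branch: pass the input through untouched
    h, w, c = x.shape
    rnd = int(rng.integers(h, out_size))  # random intermediate size
    # Nearest-neighbour resize to rnd x rnd via index sampling.
    rows = (np.arange(rnd) * h / rnd).astype(int)
    cols = (np.arange(rnd) * w / rnd).astype(int)
    resized = x[rows][:, cols]
    # Random zero-padding up to out_size x out_size.
    top = int(rng.integers(0, out_size - rnd + 1))
    left = int(rng.integers(0, out_size - rnd + 1))
    out = np.zeros((out_size, out_size, c), dtype=x.dtype)
    out[top:top + rnd, left:left + rnd] = resized
    return out
```

During the attack, the surrogate's gradient is taken at `diverse_input(x_adv)` rather than at `x_adv` itself, which discourages overfitting to the surrogate model.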

Feature Disruption Attacks [Code for 5 representative attacks]
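Feature-disruption attacks optimize a loss on intermediate-layer features of the surrogate model rather than on its classification output, which tends to transfer better across architectures. A toy NumPy sketch of such an objective, where a random ReLU layer `W` stands in for a frozen intermediate layer of a real network (purely an assumption for illustration):

```python
import numpy as np

def feature_disruption_loss(x_adv, x_clean, W):
    """Toy feature-space objective (illustrative, not the repo's API).

    W plays the role of a frozen intermediate layer of a surrogate
    network. The attack ascends this objective, pushing the adversarial
    image's intermediate features away from the clean image's features.
    """
    feat_adv = np.maximum(W @ x_adv.ravel(), 0.0)    # ReLU "layer" output
    feat_clean = np.maximum(W @ x_clean.ravel(), 0.0)
    # Squared L2 distance between intermediate feature maps.
    return float(np.sum((feat_adv - feat_clean) ** 2))
```

Concrete methods in this category differ mainly in which layers they target and how they weight individual feature channels, but they share this feature-distance structure.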
