EasyRobust
What's New
- [Apr 2024] Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging was accepted by T-IFS 2024! Code will be available at examples/imageclassification/cifar10/adversarial_training/fgsm_law
- [Jul 2023] COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts was accepted by ICCV 2023! The dataset will be available at benchmarks/coco_o
- [Jul 2023] Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training was accepted by INTERSPEECH 2023! Code will be available at examples/asr/WAPAT
- [Feb 2023] ImageNet-E: Benchmarking Neural Network Robustness against Attribute Editing was accepted by CVPR 2023! Code will be available at benchmarks/imagenet-e
- [Feb 2023] TransAudio: Towards the Transferable Adversarial Audio Attack via Learning Contextualized Perturbations was accepted by ICASSP 2023! Code will be available at examples/attacks/transaudio
- [Jan 2023] Inequality phenomenon in $l_\infty$-adversarial training, and its unrealized threats was accepted by ICLR 2023 as notable-top-25%! Code will be available at examples/attacks/inequality
- [Oct 2022] Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective was accepted by TIP 2022! Code will be available at examples/attacks/dra
- [Sep 2022] Boosting Out-of-distribution Detection with Typical Features was accepted by NeurIPS 2022! Code available at examples/ood_detection/BATS
- [Sep 2022] Enhance the Visual Representation via Discrete Adversarial Training was accepted by NeurIPS 2022! Code available at examples/imageclassification/imagenet/dat
- [Sep 2022] Added 5 tools for analyzing your robust models under tools/.
- [Sep 2022] Added 13 reproducible examples of robust training methods under examples/imageclassification/imagenet.
- [Sep 2022] Released 16 adversarially trained models, including a Swin-B that achieves SOTA adversarial robustness of 47.42% under AutoAttack!
- [Sep 2022] EasyRobust v0.2.0 released.
Our Research Project
- [T-IFS 2024] Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging [Paper, Code]
- [ICCV 2023] COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts [Paper, COCO-O dataset]
- [INTERSPEECH 2023] Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training [Paper, Code]
- [CVPR 2023] ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing [Paper, Image editing toolkit, ImageNet-E dataset]
- [ICLR 2023] Inequality phenomenon in $l_\infty$-adversarial training, and its unrealized threats [Paper, Code]
- [ICASSP 2023] TransAudio: Towards the Transferable Adversarial Audio Attack via Learning Contextualized Perturbations [Paper, Code]
- [TIP 2022] Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective [Paper, Code]
- [NeurIPS 2022] Boosting Out-of-distribution Detection with Typical Features [Paper, Code]
- [NeurIPS 2022] Enhance the Visual Representation via Discrete Adversarial Training [Paper, Code]
- [CVPR 2022] Towards Robust Vision Transformer [Paper, Code]
Introduction
EasyRobust is an easy-to-use library for state-of-the-art robust computer vision research with PyTorch. It aims to accelerate the research cycle in robust vision by collecting comprehensive robust training techniques and benchmarking them with various robustness metrics. The key features include:
- Reproducible implementations of SOTA robust image classification methods: most existing SOTA methods are implemented, including Adversarial Training, AdvProp, SIN, AugMix, DeepAugment, DrViT, RVT, FAN, APR, HAT, PRIME, DAT, and more.
- Benchmark suite: a variety of benchmark tasks, including ImageNet-A, ImageNet-R, ImageNet-Sketch, ImageNet-C, ImageNetV2, Stylized-ImageNet, and ObjectNet.
- Scalability: EasyRobust supports single-GPU training, multi-GPU training on a single machine, and large-scale multi-node training.
- Model Zoo: more than 30 open-sourced pretrained models, robustified adversarially or non-adversarially.
- Analytical tools: analysis and visualization of a pretrained robust model, including attention visualization, decision boundary visualization, convolution kernel visualization, shape vs. texture bias analysis, etc. These tools help explain how robust training improves the interpretability of a model.
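The scalability described above follows standard PyTorch distributed training conventions. As an illustrative sketch only (the script path and arguments below are hypothetical, not EasyRobust's documented CLI — consult the scripts under examples/ for the actual entry points):

```shell
# Single-GPU training (hypothetical example script path)
python examples/imageclassification/imagenet/train.py --data_dir $ImageNetDataDir

# Multi-GPU training on a single machine via torchrun (assumes 8 GPUs)
torchrun --nproc_per_node=8 \
    examples/imageclassification/imagenet/train.py --data_dir $ImageNetDataDir
```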
Technical Articles
We have a series of technical articles (in Chinese) on the functionalities of EasyRobust.
- [NeurIPS 2022] Alibaba and Zhejiang University propose using more typical features to boost out-of-distribution detection performance
- [TIP 2022] Alibaba proposes understanding and improving the transferability of adversarial examples from a distribution perspective
- [CVPR 2022] Alibaba Security builds a more robust ViT model with stronger generalization, undaunted by adversarial attacks and perturbations
- [NeurIPS 2022] Alibaba proposes a new robust vision baseline based on discrete adversarial training
Installation
Install from Source:
$ git clone https://github.com/alibaba/easyrobust.git
$ cd easyrobust
$ pip install -e .
Install from PyPI:
$ pip install easyrobust
Download the ImageNet dataset and place it in /path/to/imagenet. Set $ImageNetDataDir to the ImageNet path:
$ export ImageNetDataDir=/path/to/imagenet
[Optional]: If you use EasyRobust to evaluate the model robustness, download the benchmark dataset by:
$ sh download_data.sh
[Optional]: If you use analysis tools in tools/, install extra requirements by:
$ pip install -r requirements/optional.txt
Docker
We provide a runnable environment in docker/Dockerfile for users who do not want to install via pip. To use it, confirm that docker and nvidia-docker are installed, then run the following command:
docker build -t alibaba/easyrobust:v1 -f docker/Dockerfile .
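Once the image is built, a container can be started along these lines (the mount path is an example; `--gpus all` assumes the NVIDIA Container Toolkit is set up):

```shell
# Start an interactive container with GPU access and the ImageNet data mounted
docker run --gpus all -it --rm \
    -v /path/to/imagenet:/data/imagenet \
    alibaba/easyrobust:v1 /bin/bash
```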
Getting Started
EasyRobust focuses on two basic usages: (1) evaluating and benchmarking the robustness of a pretrained model, and (2) training your own robust models or reproducing the results of previous SOTA methods.
1. How to evaluate and benchmark the robustness of given models?
Evaluating the robustness of a model with EasyRobust requires only a few lines. We give a minimalist example in benchmarks/resnet50_example.py:
import torch
import torchvision
# the evaluate_* benchmark helpers are assumed importable from easyrobust.benchmarks
from easyrobust.benchmarks import evaluate_imagenet_val, evaluate_imagenet_a

#############################################################
# Define your model
#############################################################
model = torchvision.models.resnet50(pretrained=True)
model = model.eval()
if torch.cuda.is_available():
    model = model.cuda()
#############################################################
# Start Evaluation
#############################################################
# ood
evaluate_imagenet_val(model, 'benchmarks/data/imagenet-val')
evaluate_imagenet_a(model, 'benc
