Test-Time Adaptation Benchmark (TTAB)
This repository is the official implementation of <br> On Pitfalls of Test-time Adaptation, ICML, 2023 <br> <a href="https://marcelluszhao.github.io/">Hao Zhao*</a>, <a href="https://sites.google.com/view/yuejiangliu">Yuejiang Liu*</a>, <a href="https://people.epfl.ch/alexandre.alahi/?lang=en">Alexandre Alahi</a>, <a href="https://tlin-taolin.github.io">Tao Lin</a>
TL;DR: We introduce a test-time adaptation benchmark that systematically examines a large array of recent methods under diverse conditions. Our results reveal three common pitfalls in prior efforts.
- Model selection is exceedingly difficult for test-time adaptation due to online batch dependency.
- The effectiveness of TTA methods varies greatly depending on the quality and properties of pre-trained models.
- Even with oracle-based tuning, no existing methods can yet address all common classes of distribution shifts.
Overview
The TTAB package contains:
- Data loaders that automatically handle data processing and splitting to cover multiple significant evaluation settings considered in prior work.
- Unified dataset evaluators that standardize model evaluation for each dataset and setting.
- Multiple representative Test-time Adaptation (TTA) algorithms.
In addition, the example scripts contain default models, optimizers, and evaluation code. New algorithms can be easily added and run on all of the TTAB datasets.
News
- August 2023: We released a new benchmark dataset `Yearbook` with temporal shift. Similar to Wild-Time, we use yearbook portraits (i.e., 14156 in-distribution photos in a random order) taken from 1930-1969 to pre-train a model (with a self-supervision auxiliary task) and use the other portraits (i.e., 19275 out-of-distribution photos arranged in the order of years) from 1970-2013 for testing, which results in 98.8% in-distribution accuracy (98.0% reported in Wild-Time) and 82.4% out-of-distribution accuracy (79.5% reported in Wild-Time).
- August 2023: We released a collection of experimental setups to help you reproduce our paper results. Check more details in issue #4.
- August 2023: We released an improved pretraining script based on what we used in our project, which covers all of the benchmark datasets mentioned in our paper except ImageNet.
Available algorithms
The currently available algorithms are:
- Batch Normalization Test-time Adaptation (BN_Adapt, Schneider et al., 2020)
- Source Hypothesis Transfer (SHOT, Liang et al., 2020)
- Test-time Training (TTT, Sun et al., 2020)
- Test-time Entropy Minimization (TENT, Wang et al., 2021)
- Test-time Template Adjuster (T3A, Iwasawa & Matsuo, 2021)
- Marginal Entropy Minimization (MEMO, Zhang et al., 2022)
- Non-i.i.d. Test-time Adaptation (NOTE, Gong et al., 2022)
- Continual Test-time Adaptation (CoTTA, Wang et al., 2022)
- Conjugate Pseudo-Labels (Conjugate PL, Goyal et al., 2022)
- Efficient Anti-forgetting Test-time Adaptation (EATA, Niu et al., 2022)
- Sharpness-aware Entropy Minimization (SAR, Niu et al., 2023)
Send us a PR to add your algorithm! Our implementations use ResNets (He et al., 2015) and ViTs (Dosovitskiy et al., 2020) pretrained with ERM or with a self-supervised rotation-prediction auxiliary task (Gidaris et al., 2018).
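For readers unfamiliar with the rotation-prediction auxiliary task, the sketch below shows the standard four-way rotation formulation (Gidaris et al., 2018). It is a generic illustration, not the exact pretraining code used in TTAB; names such as `make_rotation_batch` are ours.

```python
import torch

def make_rotation_batch(images: torch.Tensor):
    """Build the 4-way rotation-prediction task from a batch of images.

    `images` is assumed to be a (N, C, H, W) float tensor with H == W.
    Returns the rotated images and their rotation labels (0/1/2/3 for
    0/90/180/270 degrees), consumed by a self-supervised auxiliary head.
    """
    rotated = torch.cat(
        [torch.rot90(images, k=k, dims=(2, 3)) for k in range(4)], dim=0
    )
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels

# During pretraining, the total loss would typically combine the supervised
# classification loss with the rotation-prediction loss of the auxiliary head,
# e.g. loss = ce(main_head(x), y) + ce(rot_head(x_rot), rot_labels).
```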
Available datasets
The currently available datasets are:
- CIFAR10 (Krizhevsky et al., 2009)
- CIFAR10-C & ImageNet-C (Hendrycks & Dietterich, 2019)
- CIFAR10.1 (Recht et al., 2018)
- ImageNet (Deng et al., 2009)
- ImageNet-V2 (Recht et al., 2019)
- OfficeHome (Venkateswara et al., 2017)
- PACS (Li et al., 2017)
- ColoredMNIST (Arjovsky et al., 2019)
- Waterbirds (Sagawa et al., 2019)
- Yearbook (Ginosar et al., 2015)
Send us a PR to add your dataset! Any custom image dataset with the folder structure `dataset/domain/class/image.xyz` is readily usable.
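For reference, a dataset laid out as `dataset/domain/class/image.xyz` can be enumerated with a few lines of standard Python. The snippet below is only a sketch of how such a tree maps to (domain, class, path) records; it is not part of the TTAB loader, and `index_custom_dataset` is a hypothetical helper.

```python
from pathlib import Path

def index_custom_dataset(root: str):
    """Walk a dataset/domain/class/image.xyz tree and collect samples.

    Returns a list of (domain, class_name, image_path) tuples that a custom
    loader could consume. Assumes exactly two directory levels (domain, then
    class) below `root`.
    """
    samples = []
    for domain_dir in sorted(Path(root).iterdir()):
        if not domain_dir.is_dir():
            continue
        for class_dir in sorted(domain_dir.iterdir()):
            if not class_dir.is_dir():
                continue
            for image_path in sorted(class_dir.glob("*")):
                if image_path.is_file():
                    samples.append((domain_dir.name, class_dir.name, image_path))
    return samples
```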
Installation
To run a baseline test, please prepare the relevant pre-trained checkpoints for the base model and place them in `pretrain/ckpt/`.
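As a rough illustration, saving and restoring a pretrained base model with plain PyTorch looks like the sketch below. The exact checkpoint format expected by TTAB may differ, and the file name and dictionary key used here are only examples.

```python
import torch
from torchvision.models import resnet50  # stand-in for your base model

# Save an ERM-pretrained model so the benchmark can load it later.
model = resnet50()
# ... train the model on the source data ...
torch.save({"model": model.state_dict()}, "pretrain/ckpt/resnet50_example.pth")

# Load it back before running a baseline test.
checkpoint = torch.load("pretrain/ckpt/resnet50_example.pth", map_location="cpu")
model.load_state_dict(checkpoint["model"])
```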
Requirements
The TTAB package depends on the following requirements:
- numpy>=1.21.5
- pandas>=1.1.5
- pillow>=9.0.1
- pytz>=2021.3
- torch>=1.7.1
- torchvision>=0.8.2
- timm>=0.6.11
- scikit-learn>=1.0.3
- scipy>=1.7.3
- tqdm>=4.56.2
- tyro>=0.5.5
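If you want to double-check your environment against these minimum versions, a quick optional check in Python could look like the sketch below; it is not part of the package itself, and the subset of packages listed is arbitrary.

```python
from importlib.metadata import version

# A few of the minimum versions listed above; extend as needed.
MINIMUM_VERSIONS = {
    "numpy": "1.21.5",
    "torch": "1.7.1",
    "torchvision": "0.8.2",
    "timm": "0.6.11",
}

for package, minimum in MINIMUM_VERSIONS.items():
    print(f"{package}: installed {version(package)}, required >= {minimum}")
```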
Datasets
Distribution shift occurs when the test distribution differs from the training distribution, and it can considerably degrade the performance of machine learning models deployed in the real world. The form of distribution shift varies greatly across applications in practice. In TTAB, we collect 10 datasets and systematically sort them into 5 types of distribution shifts (an illustrative sketch of one shift type follows the list):
- Covariate Shift
- Natural Shift
- Domain Generalization
- Label Shift
- Spurious Correlation Shift
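To make one of these settings concrete: online label shift is commonly simulated by re-ordering a test stream so that the label distribution drifts over time. The sketch below shows one generic way to do this (grouping samples by class); it is an illustration rather than TTAB's own sampling code, and `label_shift_stream` is a hypothetical helper.

```python
import numpy as np

def label_shift_stream(labels: np.ndarray, seed: int = 0) -> np.ndarray:
    """Return test indices ordered so that classes arrive in blocks.

    Feeding a model samples in this order creates an online label-shift
    stream: the label distribution of each incoming batch differs sharply
    from the (roughly balanced) training distribution.
    """
    rng = np.random.default_rng(seed)
    order = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        order.append(idx)
    return np.concatenate(order)
```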

Using the TTAB package
The TTAB package provides a simple, standardized interface for all TTA algorithms and datasets in the benchmark. The short Python snippet below covers all of the steps needed to get started with a user-customizable configuration, including the choice of TTA algorithm, dataset, base model, model selection method, experimental setup, evaluation scenario (discussed in more detail in the Scenario section below), and protocol.
```python
config, scenario = configs_utils.config_hparams(config=init_config)

# Dataset.
test_data_cls = define_dataset.ConstructTestDataset(config=config)
test_loader = test_data_cls.construct_test_loader(scenario=scenario)

# Base model.
model = define_model(config=config)
load_pretrained_model(config=config, model=model)

# Algorithms.
model_adaptation_cls = get_model_adaptation_method(
    adaptation_name=scenario.model_adaptation_method
)(meta_conf=config, model=model)
model_selection_cls = get_model_selection_method(
    selection_name=scenario.model_selection_method
)(meta_conf=config, model=model)

# Evaluate.
benchmark = Benchmark(
    scenario=scenario,
    model_adaptation_cls=model_adaptation_cls,
    model_selection_cls=model_selection_cls,
    test_loader=test_loader,
    meta_conf=config,
)
benchmark.eval()
```
Data loading
For evaluation, the TTAB package provides two types of dataset objects. The standard dataset object stores data, labels, and indices, and exposes several APIs for high-level manipulation, such as mixing the source and target domains. It supports common evaluation metrics such as top-1 accuracy and cross-entropy.
To support other metrics for more robust evaluation, such as worst-group accuracy, we provide a group-wise dataset object that additionally records group information.
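As background for why group information is recorded: worst-group accuracy is simply the minimum per-group accuracy. The snippet below computes it from predictions, labels, and group indices; it is a generic sketch rather than the evaluator used inside TTAB.

```python
import numpy as np

def worst_group_accuracy(preds: np.ndarray, labels: np.ndarray, groups: np.ndarray) -> float:
    """Minimum accuracy over groups (e.g., (class, spurious-attribute) pairs)."""
    accuracies = []
    for g in np.unique(groups):
        mask = groups == g
        accuracies.append((preds[mask] == labels[mask]).mean())
    return float(min(accuracies))
```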
To provide a more seamless user experience, we have designed a unified data loader that supports all dataset objects. To load data in TTAB, simply run the following code with config and scenario as inputs.
```python
test_data_cls = define_dataset.ConstructTestDataset(config=config)
test_loader = test_data_cls.construct_test_loader(scenario=scenario)
```
Scenario
In the scenario, we specify all relevant parameters for defining a distribution shift problem in practice, such as `test_domain` and `test_case`. `test_domain` specifies the implicit $\mathcal{P}(a^{1:K})$ and the chosen sampling strategy, while `test_case` determines how the existing dataset corresponding to `test_domain` is organized into a data stream that is fed to TTA methods. In addition, the scenario defines the model architecture, TTA method, and model selection method to be used for the defined distribution shift problem.
Here, we present an example of a scenario. Please feel free to suggest new scenarios for your research.
"S1": Scenario(
task="classification",
model_name="resnet26",
model_adaptation