MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022)
This repo provides the PyTorch source code of our paper: MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022). [PDF] [ICLR 2022 Video] [Slides] [HuggingFace]
Project website: https://MetaShift.readthedocs.io/
@InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
}
This repo provides the scripts for generating the proposed MetaShift, which offers a resource of 1000s of distribution shifts.
<!-- and the PyTorch source code for the experiments of evaluating distribution shifts and training conflicts. -->

Abstract
Understanding the performance of machine learning models across diverse data distributions is critically important for reliable applications. Motivated by this, there is a growing focus on curating benchmark datasets that capture distribution shifts. While valuable, the existing benchmarks are limited in that many of them only contain a small number of shifts and they lack systematic annotation about what is different across different shifts. We present MetaShift---a collection of 12,868 sets of natural images across 410 classes---to address this challenge. We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. The key construction idea is to cluster images using their metadata, which provides context for each image (e.g. cats with cars or cats in bathrooms) representing distinct data distributions. MetaShift has two important benefits: first, it contains orders of magnitude more natural data shifts than previously available; second, it provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets. We demonstrate the utility of MetaShift in benchmarking several recent proposals for training models to be robust to data shifts. We find that simple empirical risk minimization performs best when shifts are moderate and that no method has a systematic advantage for large shifts. We also show how MetaShift can help to visualize conflicts between data subsets during model training.
<p align='center'> <img width='100%' src='./docs/figures/MetaShift-Examples.jpg'/> <b>Figure 1: Example Cat vs. Dog Images from MetaShift. </b> For each class, MetaShift provides many subsets of data, each of which corresponds to a different context (the context is stated in parentheses). </p>

<p align='center'> <img width='100%' src='./docs/figures/MetaShift-InfoGraphic.jpg'/> <b>Figure 2: Infographics of MetaShift. </b> </p>

<p align='center'> <img width='100%' src='./docs/figures/Cat-MetaGraph.jpg'/> <b>Figure 3: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b> </p>

Repo Structure Overview
.
├── README.md
├── dataset/
├── meta_data/
├── generate_full_MetaShift.py
├── ...
├── experiments/
├── subpopulation_shift/
├── main_generalization.py
├── ...
The dataset folder provides the script for generating MetaShift.
The experiments folder provides the experiments on MetaShift from the paper.
Dependencies
- Python 3.6.13 (e.g. conda create -n venv python=3.6.13)
- PyTorch Version: 1.4.0
- Torchvision Version: 0.5.0
Download Visual Genome
We leveraged the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Download the pre-processed and cleaned version of Visual Genome by GQA.
- Download image files (~20GB) and scene graph annotations:
wget -c https://nlp.stanford.edu/data/gqa/images.zip
unzip images.zip -d allImages
wget -c https://nlp.stanford.edu/data/gqa/sceneGraphs.zip
unzip sceneGraphs.zip -d sceneGraphs
- After this step, the base dataset file structure should look like this:
/data/GQA/
allImages/
images/
<ID>.jpg
sceneGraphs/
train_sceneGraphs.json
val_sceneGraphs.json
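A quick sanity check of the layout above can be sketched as follows (a hypothetical helper, not part of this repo; the /data/GQA base path is just this README's example location):

```python
from pathlib import Path

def check_gqa_layout(base_dir: str) -> list:
    """Return a list of expected GQA entries that are missing under base_dir."""
    base = Path(base_dir)
    expected = [
        base / "allImages" / "images",
        base / "sceneGraphs" / "train_sceneGraphs.json",
        base / "sceneGraphs" / "val_sceneGraphs.json",
    ]
    return [str(p) for p in expected if not p.exists()]
```

After unzipping, `check_gqa_layout("/data/GQA")` should return an empty list; any returned paths point to files you still need to download or extract.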
- Specify local path of Visual Genome
Extract the files, and then specify the folder path (e.g., IMAGE_DATA_FOLDER=/data/GQA/allImages/images/) in Constants.py.
Generate the Full MetaShift Dataset (subsets defined by contextual objects)
Understanding dataset/meta_data/full-candidate-subsets.pkl
The metadata file dataset/meta_data/full-candidate-subsets.pkl is the most important piece of metadata of MetaShift, which provides the full subset information of MetaShift. To facilitate understanding, we have provided a notebook dataset/understanding_full-candidate-subsets-pkl.ipynb to show how to extract information from it.
Basically, the pickle file stores a collections.defaultdict(set) object, which contains 17,938 keys. Each key is a string of the subset name like dog(frisbee), and the corresponding value is the set of IDs of the images that belong to this subset. The image IDs can be used to retrieve the image files from the Visual Genome dataset that you just downloaded. In our current version, 13,543 out of 17,938 subsets have more than 25 valid images. In addition, dataset/meta_data/full-candidate-subsets.pkl is derived from the scene graph annotations, so check them out if your project needs additional information about each image.
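The file format can be illustrated with a tiny mock (the subset contents and image IDs below are made up for the example; the real file has 17,938 keys):

```python
import pickle
from collections import defaultdict

# Build a mock with the same structure as full-candidate-subsets.pkl:
# subset name -> set of Visual Genome image IDs.
mock = defaultdict(set)
mock["dog(frisbee)"] = {"2317422", "2365783", "2410213"}
mock["cat(sofa)"] = {"2325661"}

with open("mock-candidate-subsets.pkl", "wb") as f:
    pickle.dump(mock, f)

# Reading it back works the same way for the real metadata file.
with open("mock-candidate-subsets.pkl", "rb") as f:
    candidate_subsets = pickle.load(f)

# Keep only subsets with at least N valid images (MetaShift uses N=25;
# the mock uses a tiny threshold so the filter has an effect here).
N = 2
valid = {name: ids for name, ids in candidate_subsets.items() if len(ids) >= N}
print(sorted(valid))  # ['dog(frisbee)']
```

Swapping the mock path for dataset/meta_data/full-candidate-subsets.pkl and N for 25 reproduces the subset filtering described above.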
Generate Full MetaShift
Since the total number of all subsets is very large, all of the following scripts only generate a subset of MetaShift. As specified in dataset/Constants.py, we only generate MetaShift for the following classes (subjects). You can add any additional classes (subjects) into the list. See dataset/meta_data/class_hierarchy.json for the full object vocabulary and its hierarchy.
SELECTED_CLASSES = [ 'cat', 'dog', 'bus', 'truck', 'elephant', 'horse', 'bowl', 'cup', ]
In addition, to save storage, all copied images are symbolic links by default. You can set use_symlink=False in the code to perform actual file copying instead. If you really want to generate the full MetaShift, then set ONLY_SELECTED_CLASSES = False in dataset/Constants.py.
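The symlink-versus-copy behavior can be sketched as below (place_image is a hypothetical helper that mirrors the idea, not the actual code in generate_full_MetaShift.py):

```python
import os
import shutil

def place_image(src: str, dst: str, use_symlink: bool = True) -> None:
    """Put one source image into a generated subset folder.

    use_symlink=True saves disk space by creating a symbolic link;
    use_symlink=False performs an actual file copy.
    """
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    if use_symlink:
        # Link to the absolute source path so the link stays valid
        # no matter where the destination folder lives.
        os.symlink(os.path.abspath(src), dst)
    else:
        shutil.copy2(src, dst)
```

Either way the generated subset folders look identical to downstream data loaders; only the disk usage differs.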
cd dataset/
python generate_full_MetaShift.py
The following files will be generated by executing the script. Modify the global variable SUBPOPULATION_SHIFT_DATASET_FOLDER to change the destination folder.
/data/MetaShift/MetaDataset-full
├── cat/
├── cat(keyboard)/
├── cat(sink)/
├── ...
├── dog/
├── dog(surfboard)
├── dog(boat)/
├── ...
├── bus/
├── ...
Beyond the generated MetaShift dataset, the script also generates the meta-graphs for each class in dataset/meta-graphs.
.
├── README.md
├── dataset/
├── generate_full_MetaShift.py
├── meta-graphs/ (generated meta-graph visualization)
├── cat_graph.jpg
├── dog_graph.jpg
├── ...
├── ...
Bonus: Generate the MetaShift-Attributes Dataset (subsets defined by subject attributes)
<p align='center'> <img width='100%' src='./docs/figures/MetaShift-Attributes-Examples.jpg'/> <b>Figure: Example Subsets based on object attribute contexts (the attribute is stated in parentheses). </b> MetaShift covers attributes including activity (e.g., sitting, jumping), color (e.g., orange, white), material (e.g., wooden, metallic), shape (e.g., round, square), and so on. </p>

Understanding dataset/attributes_MetaShift/attributes-candidate-subsets.pkl
dataset/attributes_MetaShift/attributes-candidate-subsets.pkl stores the metadata for MetaShift-Attributes, where each subset is defined by the attribute of the subject, e.g. cat(orange), cat(white), dog(sitting), dog(jumping).
attributes-candidate-subsets.pkl has the same data format as full-candidate-subsets.pkl. To facilitate understanding, we have provided a notebook dataset/attributes_MetaShift/understanding_attributes-candidate-subsets-pkl.ipynb to show how to extract information from it.
Basically, the pickle file stores a collections.defaultdict(set) object, which contains 4,962 keys. Each key is a string of the subset name like cat(orange), and the corresponding value is the set of IDs of the images that belong to this subset.
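Since both metadata files use the same subject(context) naming scheme for their keys, a small helper can split a subset name into its subject and context (a hypothetical utility, not part of this repo):

```python
import re

def parse_subset_name(name: str):
    """Split a subset name like 'cat(orange)' into ('cat', 'orange').

    A bare class name with no context, like 'cat', returns ('cat', None).
    """
    m = re.fullmatch(r"([^()]+)\(([^()]+)\)", name)
    if m:
        return m.group(1), m.group(2)
    return name, None

print(parse_subset_name("dog(frisbee)"))  # ('dog', 'frisbee')
print(parse_subset_name("cat"))           # ('cat', None)
```

This makes it easy to group the pickle's keys by subject class or by shared context when exploring either full-candidate-subsets.pkl or attributes-candidate-subsets.pkl.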
