OpenMixup

CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark
📘Documentation | 🛠️Installation | 🚀Model Zoo | 👀Awesome Mixup | 🔍Awesome MIM | 🆕News
Introduction
The main branch works with PyTorch 1.8 (required by some self-supervised methods) or higher (we recommend PyTorch 1.12). You can still use PyTorch 1.6 for supervised classification methods.
OpenMixup is an open-source toolbox for supervised, self-, and semi-supervised visual representation learning with mixup, based on PyTorch and focused on mixup-related methods. OpenMixup is currently being updated to adopt the new features and code structure of OpenMMLab 2.0 (#42).
- **Modular Design.** OpenMixup follows a code architecture similar to OpenMMLab projects, decomposing the framework into components so that users can easily build customized models by combining different modules. OpenMixup is also transplantable to OpenMMLab projects (e.g., MMPreTrain).
- **All in One.** OpenMixup provides popular backbones, mixup methods, and semi- and self-supervised algorithms. Users can perform image classification (CNN & Transformer) and self-supervised pre-training (contrastive and autoregressive) under the same framework.
- **Standard Benchmarks.** OpenMixup supports standard benchmarks for image classification, mixup classification, and self-supervised evaluation, and provides smooth evaluation on downstream tasks with open-source projects (e.g., object detection and segmentation on Detectron2 and MMSegmentation).
- **State-of-the-art Methods.** OpenMixup provides awesome lists of popular mixup and self-supervised methods, and is being updated to support more state-of-the-art image classification and self-supervised methods.
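To make the mixup idea concrete, here is a minimal NumPy sketch of the original input mixup (Zhang et al., ICLR 2018) that the methods in this toolbox generalize. This is an illustration only, not OpenMixup's implementation:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Minimal input mixup: blend each sample with a randomly paired
    sample from the same batch using a Beta-distributed weight."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    idx = rng.permutation(len(x))         # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[idx]  # blended inputs
    # labels are mixed with the same weight, yielding soft targets
    y_mix = lam * y + (1 - lam) * y[idx]
    return x_mix, y_mix, lam

# toy batch: 4 "images" of shape (2, 2) with one-hot labels
x = np.arange(16, dtype=np.float64).reshape(4, 2, 2)
y = np.eye(4)
x_mix, y_mix, lam = mixup_batch(x, y)
```

Variants supported by OpenMixup (e.g., CutMix, saliency-guided mixing) differ in how the two samples are combined, but follow the same pair-and-blend pattern.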
News and Updates
[2025-03-19] OpenMixup v0.2.10 is released, supporting PyTorch >= 2.0 and more mixup augmentations and networks.
Installation
OpenMixup is compatible with Python 3.6/3.7/3.8/3.9 and PyTorch >= 1.6. Here is a quick installation in development mode:
```shell
conda create -n openmixup python=3.8 pytorch=1.12 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate openmixup
pip install openmim
mim install mmcv-full
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py develop
```
<details>
<summary>Installation with PyTorch 2.x requires a different process.</summary>
```shell
conda create -n openmixup python=3.9
conda activate openmixup
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install https://download.openmmlab.com/mmcv/dist/cu118/torch2.1.0/mmcv_full-1.7.2-cp39-cp39-manylinux1_x86_64.whl
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
pip install -r requirements/runtime.txt
python setup.py develop
```
</details>
For more detailed installation and dataset preparation, please refer to install.md.
Getting Started
OpenMixup supports Linux and macOS. It enables easy implementation and extension of mixup data augmentation methods in existing supervised, self-, and semi-supervised visual recognition models. Please see get_started.md for the basic usage of OpenMixup.
Training and Evaluation Scripts
Here, we provide scripts for starting a quick end-to-end training with multiple GPUs and the specified CONFIG_FILE.
```shell
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS} [optional arguments]
```
For example, you can run the script below to train a ResNet-50 classifier on ImageNet with 4 GPUs:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 bash tools/dist_train.sh configs/classification/imagenet/resnet/resnet50_4xb64_cos_ep100.py 4
```
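The config name encodes the training recipe: ResNet-50, 4 GPUs × 64 images per GPU, cosine learning-rate schedule, 100 epochs. As a rough sketch of what such an OpenMMLab-style config file contains (the field names below are illustrative assumptions and may not match OpenMixup's exact schema):

```python
# Illustrative OpenMMLab-style config fragment; field names are assumptions
# for exposition, not a verbatim copy of OpenMixup's config schema.
model = dict(
    type='Classification',
    backbone=dict(type='ResNet', depth=50),
    head=dict(type='ClsHead', num_classes=1000),
)
data = dict(samples_per_gpu=64)                         # 4 GPUs x 64 = batch 256
lr_config = dict(policy='CosineAnnealing', min_lr=0.0)  # cosine schedule ("cos")
runner = dict(type='EpochBasedRunner', max_epochs=100)  # 100 epochs ("ep100")
```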
After training, you can test the trained models with the corresponding evaluation script:
```shell
bash tools/dist_test.sh ${CONFIG_FILE} ${GPUS} ${PATH_TO_MODEL} [optional arguments]
```
Development
Please see Tutorials for more development examples and technical details:
- Downstream Tasks for Self-Supervised Learning
- Useful Tools
<p align="right">(<a href="#top">back to top</a>)</p>

Overview of Model Zoo
Please run experiments or find results on each config page. Refer to Mixup Benchmarks for benchmarking results of mixup methods. See Model Zoos Sup and Model Zoos SSL for a comprehensive collection of mainstream backbones and self-supervised algorithms. We also provide the paper lists Awesome Mixups and Awesome MIM for reference. Config files and links to models are available on the config pages below. Checkpoints and training logs are being updated!
<table align="center"> <tbody> <tr align="center" valign="bottom"> <td> <b>Supported Backbone Architectures</b> </td> <td> <b>Mixup Data Augmentations</b> </td> </tr> <tr valign="top"> <td> <ul> <li><a href="https://dl.acm.org/doi/10.1145/3065386">AlexNet</a> (NeurIPS'2012) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/alexnet/">config</a></li> <li><a href="https://arxiv.org/abs/1409.1556">VGG</a> (ICLR'2015) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/vgg/">config</a></li> <li><a href="https://arxiv.org/abs/1512.00567">InceptionV3</a> (CVPR'2016) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/inception_v3/">config</a></li> <li><a href="https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html">ResNet</a> (CVPR'2016) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/">config</a></li> <li><a href="https://arxiv.org/abs/1611.05431">ResNeXt</a> (CVPR'2017) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/">config</a></li> <li><a href="https://arxiv.org/abs/1709.01507">SE-ResNet</a> (CVPR'2018) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/">config</a></li> <li><a href="https://arxiv.org/abs/1709.01507">SE-ResNeXt</a> (CVPR'2018) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/">config</a></li> <li><a href="https://arxiv.org/abs/1807.11164">ShuffleNetV1</a> (CVPR'2018) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/shufflenet_v1/">config</a></li> <li><a href="https://arxiv.org/abs/1807.11164">ShuffleNetV2</a> (ECCV'2018) <a href="https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/shufflenet_v2/">config</a></li> 
</ul> </td> </tr> </tbody> </table>