# EfficientTrain++ (TPAMI 2024 & ICCV 2023)
This repo releases the code and pre-trained models of EfficientTrain++, an off-the-shelf, easy-to-implement algorithm for the efficient training of foundation visual backbones.
[TPAMI 2024]
EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training
Yulin Wang, Yang Yue, Rui Lu, Yizeng Han, Shiji Song, and Gao Huang
Tsinghua University, BAAI
[arXiv]
[ICCV 2023]
EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones
Yulin Wang, Yang Yue, Rui Lu, Tianjiao Liu, Zhao Zhong, Shiji Song, and Gao Huang
Tsinghua University, Huawei, BAAI
[arXiv]
- Update on 2024.05.14: I'm highly interested in extending EfficientTrain++ to CLIP-style models, multi-modal large language models, generative models (e.g., diffusion-based or token-based), and advanced visual self-supervised learning methods. I'm always open to discussions and potential collaborations; if you are interested, please send me an e-mail (wang-yl19@mails.tsinghua.edu.cn).
## Overview
We present a novel curriculum learning approach for the efficient training of foundation visual backbones. Our algorithm, EfficientTrain++, is simple, general, yet surprisingly effective. As an off-the-shelf approach, it reduces the training time of various popular models (e.g., ResNet, ConvNeXt, DeiT, PVT, Swin, CSWin, and CAFormer) by 1.5−3.0× on ImageNet-1K/22K without sacrificing accuracy. It also demonstrates efficacy in self-supervised learning (e.g., MAE).
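The core intuition behind the curriculum is to expose the model to easier, lower-frequency image content first and reveal higher-frequency details only later in training. The following is a minimal single-channel NumPy sketch of a low-frequency cropping operation in the Fourier domain, written purely for illustration (the function name and the rescaling choice are our own; the repo's actual implementation lives in the training code):

```python
import numpy as np

def low_freq_crop(img, band):
    """Keep only the central `band` x `band` low-frequency region of the
    image's 2-D DFT, then map back to pixel space at size `band` x `band`.
    `img` is an H x W array (one channel); `band` must be even and <= min(H, W)."""
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))        # low frequencies at the center
    cy, cx = H // 2, W // 2
    b = band // 2
    Fc = F[cy - b:cy + b, cx - b:cx + b]         # central (low-frequency) crop
    # Inverse transform of the cropped spectrum; the rescaling preserves
    # the mean brightness of the original image.
    return np.fft.ifft2(np.fft.ifftshift(Fc)).real * (band * band) / (H * W)
```

The result is a smaller image that retains the low-frequency structure of the input, which is what makes early-stage training both cheaper (smaller inputs) and easier (less high-frequency detail).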
<p align="center"> <img src="./imgs/overview.png" width="450"> </p>

## Highlights of our work
- 1.5−3.0× lossless training or pre-training speedup on ImageNet-1K and ImageNet-22K. The measured wall-clock speedup matches the theoretical one, and neither upstream nor downstream performance is affected.
- Effective for diverse visual backbones, including ConvNets, isotropic/multi-stage ViTs, and ConvNet-ViT hybrid models. For example, ResNet, ConvNeXt, DeiT, PVT, Swin, CSWin, and CAFormer.
- Dramatically improves the performance of relatively small models (e.g., on ImageNet-1K, DeiT-S: 80.3% -> 81.3%, DeiT-T: 72.5% -> 74.4%).
- Superior performance across varying training budgets (e.g., training cost of 0 - 300 epochs or more).
- Applicable to both supervised learning and self-supervised learning (e.g., MAE).
- Optional techniques optimized for limited CPU/memory capabilities (e.g., machines that cannot sustain a high data pre-processing throughput).
- Optional techniques optimized for large-scale parallel training (e.g., 16-64 GPUs or more).
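To make the curriculum idea above concrete, here is a sketch of how an epoch-indexed schedule might be wired up. The released schedules are in TRAINING.md; the three equal stages, input sizes, and RandAugment magnitudes below are illustrative assumptions, not the actual configuration:

```python
def curriculum_config(epoch: int, total_epochs: int):
    """Map a training epoch to an (input_size, randaug_magnitude) pair.

    Illustrative only: EfficientTrain-style curricula start from
    low-frequency (small) inputs with weak augmentation and ramp both up.
    The stage boundaries and the numbers here are hypothetical.
    """
    sizes = (160, 192, 224)    # hypothetical: low -> full resolution
    magnitudes = (5, 7, 9)     # hypothetical: weak -> strong RandAugment
    stage = min(epoch * len(sizes) // total_epochs, len(sizes) - 1)
    return sizes[stage], magnitudes[stage]
```

Because early epochs run on smaller inputs, their per-iteration cost drops roughly quadratically with side length, which is where the overall wall-clock savings come from.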
## Catalog
- [x] ImageNet-1K Training Code
- [x] ImageNet-1K Pre-trained Models
- [x] ImageNet-22K -> ImageNet-1K Fine-tuning Code
- [x] ImageNet-22K Pre-trained Models
- [x] ImageNet-22K -> ImageNet-1K Fine-tuned Models
## Installation
We support PyTorch>=2.0.0 and torchvision>=0.15.1. Please install them following the official instructions.
Clone this repo and install the required packages:
```
git clone https://github.com/LeapLabTHU/EfficientTrain
pip install timm==0.4.12 tensorboardX six
```
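Before launching training, it can help to confirm the installed versions meet the stated minimums (PyTorch >= 2.0.0, torchvision >= 0.15.1). A tiny hypothetical helper, not part of this repo, using plain string parsing:

```python
def meets_min(version: str, minimum: str) -> bool:
    """Return True if a dotted version string is at least `minimum`.
    Ignores local build suffixes such as '+cu118'. Hypothetical helper,
    not part of the EfficientTrain codebase."""
    parse = lambda v: [int(p) for p in v.split("+")[0].split(".")[:3]]
    return parse(version) >= parse(minimum)

# Usage idea:
#   import torch, torchvision
#   assert meets_min(torch.__version__, "2.0.0")
#   assert meets_min(torchvision.__version__, "0.15.1")
```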
The instructions for preparing ImageNet-1K/22K datasets can be found here.
## Training
See TRAINING.md for the training instructions.
## Pre-trained models, evaluation & fine-tuning
See EVAL.md for the pre-trained models and the instructions for evaluating or fine-tuning them.
## Results
### Supervised learning on ImageNet-1K
<p align="center"> <img src="./imgs/in_1k.png" width="900"> </p>

### ImageNet-22K pre-training

<p align="center"> <img src="./imgs/in_22k.png" width="900"> </p>

### Supervised learning on ImageNet-1K (varying training budgets)

<p align="center"> <img src="./imgs/vary_epoch.png" width="900"> </p> <p align="center"> <img src="./imgs/300ep.png" width="450"> </p>

### Object detection and instance segmentation on COCO

<p align="center"> <img src="./imgs/coco.png" width="450"> </p>

### Semantic segmentation on ADE20K

<p align="center"> <img src="./imgs/seg.png" width="450"> </p>

### Self-supervised learning results on top of MAE

<p align="center"> <img src="./imgs/mae.png" width="450"> </p>

## TODO
This repo is still being updated. If you need anything, whether or not it is listed below, please send me an e-mail (wang-yl19@mails.tsinghua.edu.cn).
- [ ] A detailed tutorial on how to use this repo to train (customized) models on customized datasets.
- [ ] ImageNet-22K Training Code
- [ ] ImageNet-1K Self-supervised Learning Code (EfficientTrain + MAE)
- [ ] EfficientTrain + MAE Pre-trained Models
## Acknowledgments
This repo is mainly developed on top of ConvNeXt; we sincerely thank its authors for their efficient and neat codebase. This repo is also built using DeiT and timm.
## Citation
If you find this work valuable or use our code in your own research, please consider citing us:
```
@article{wang2024EfficientTrain_pp,
  title   = {EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training},
  author  = {Wang, Yulin and Yue, Yang and Lu, Rui and Han, Yizeng and Song, Shiji and Huang, Gao},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year    = {2024},
  doi     = {10.1109/TPAMI.2024.3401036}
}

@inproceedings{wang2023EfficientTrain,
  title     = {EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones},
  author    = {Wang, Yulin and Yue, Yang and Lu, Rui and Liu, Tianjiao and Zhong, Zhao and Song, Shiji and Huang, Gao},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023}
}
```