ScatSimCLR: self-supervised contrastive learning with pretext task regularization for small-scale datasets
This repo contains the official PyTorch implementation of the paper:

ScatSimCLR: self-supervised contrastive learning with pretext task regularization for small-scale datasets, accepted at the ICCV 2021 2nd Visual Inductive Priors for Data-Efficient Deep Learning Workshop

Vitaliy Kinakh, Olga Taran, Sviatoslav Voloshynovskiy
Paper
- 🏆 SOTA on CIFAR20 Unsupervised Image Classification. Check out Papers With Code
Contents
- Introduction
- Installation
- Training
- Evaluation
- Results
- Citation
Introduction
In this paper, we consider the problem of self-supervised learning for small-scale datasets based on a contrastive loss. Factors such as the complexity of training (which requires complex architectures), the number of views produced by data augmentation, and their impact on classification accuracy remain understudied. We consider a contrastive-loss architecture such as SimCLR, in which the baseline model is replaced by the geometrically invariant "hand-crafted" network ScatNet with a small trainable adapter network, and argue that the number of parameters of the whole system and the number of views can be considerably reduced while practically preserving the same classification accuracy.

In addition, we investigate the impact of regularization strategies using pretext task learning based on an estimation of the parameters of augmentation transforms, such as rotation and jigsaw permutation, for both traditional baseline models and ScatNet-based models.

Finally, we demonstrate that the proposed architecture with pretext task learning regularization achieves state-of-the-art classification performance with a smaller number of trainable parameters and a reduced number of views.
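For reference, the SimCLR-style contrastive objective at the core of the system is the NT-Xent (normalized temperature-scaled cross-entropy) loss. The following is a minimal NumPy sketch of that standard loss, not the repository's implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two batches of embeddings.
    z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # (2N, 2N) cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarities
    # the positive for sample i is its other view, i.e. index (i + n) mod 2n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # row-wise cross-entropy with the positive as the target class
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Identical views yield a lower loss than unrelated ones, since each positive pair attains the maximum possible similarity.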
We outperform state-of-the-art methods in terms of classification accuracy, in particular by +8.9% on CIFAR20.
Installation
Conda installation

```shell
conda env create -f env.yml
```
Training
To run training without the pretext task, fill in the config file. An example of a detailed config file for training without the pretext task is config.yaml.

Then run

```shell
python main.py --mode unsupervised --config <path to config file>
```
To run training with the pretext task, fill in the config file. An example of a detailed config file for training with the pretext task is config_pretext.yaml.

Then run

```shell
python main.py --mode pretext --config <path to config file>
```
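The pretext task regularizes training by predicting the parameters of the applied augmentation transform, e.g. the rotation angle. The sketch below shows one plausible way to generate rotation pretext labels; the function name and batch layout are assumptions for illustration, not the repository's code:

```python
import numpy as np

def make_rotation_pretext_batch(images, rng):
    """Rotate each image by a random multiple of 90 degrees and return the
    rotated images together with the rotation index (0..3) that the
    pretext head is trained to predict. `images`: (N, H, W, C) array."""
    labels = rng.integers(0, 4, size=len(images))
    rotated = [np.rot90(img, k=int(k), axes=(0, 1))
               for img, k in zip(images, labels)]
    return np.stack(rotated), labels
```

The pretext head then receives the rotated views and is trained with a 4-way classification loss on these labels, alongside the contrastive objective.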
Evaluation
To run evaluation, fill in the config file the same way as the config for training without the pretext task: config.yaml. Put the path to the model in fine_tune_from.

Then run

```shell
python evaluate.py --config <path to config file>
```
Results
| Dataset | Top-1 accuracy | Model        | Image size | J | L  | Download link |
|---------|----------------|--------------|------------|---|----|---------------|
| STL10   | 85.11%         | ScatSimCLR30 | (96, 96)   | 2 | 16 | Download      |
| CIFAR20 | 63.86%         | ScatSimCLR30 | (32, 32)   | 2 | 16 | Download      |
Citation
```bibtex
@inproceedings{kinakh2021scatsimclr,
  title={ScatSim{CLR}: self-supervised contrastive learning with pretext task regularization for small-scale datasets},
  author={Vitaliy Kinakh and Slava Voloshynovskiy and Olga Taran},
  booktitle={2nd Visual Inductive Priors for Data-Efficient Deep Learning Workshop},
  year={2021},
  url={https://openreview.net/forum?id=IQ87KPOWyg1}
}
```