Deepcluster
Deep Clustering for Unsupervised Learning of Visual Features
News
We have released the paper and code for SwAV, our new self-supervised method. SwAV pushes self-supervised learning to only 1.2% away from supervised learning on ImageNet with a ResNet-50! It combines online clustering with a multi-crop data augmentation.
We also present DeepCluster-v2, which is an improved version of DeepCluster (ResNet-50, better data augmentation, cosine learning rate schedule, MLP projection head, use of centroids, ...). Check out DeepCluster-v2 code.
DeepCluster
This code implements the unsupervised training of convolutional neural networks, or convnets, as described in the paper Deep Clustering for Unsupervised Learning of Visual Features.
Moreover, we provide the evaluation protocol codes we used in the paper:
- Pascal VOC classification
- Linear classification on activations
- Instance-level image retrieval
Finally, this code also includes a visualisation module that allows you to visually assess the quality of the learned features.
Requirements
- a Python installation version 2.7
- the SciPy and scikit-learn packages
- a PyTorch install version 0.1.8 (pytorch.org)
- CUDA 8.0
- a Faiss install (Faiss)
- The ImageNet dataset (which can be automatically downloaded by recent versions of torchvision)
Pre-trained models
We provide pre-trained models with AlexNet and VGG-16 architectures, available for download.
- The models in Caffe format expect BGR inputs that range in [0, 255]. You do not need to subtract the per-color-channel mean image since the preprocessing of the data is already included in our released models.
- The models in PyTorch format expect RGB inputs that range in [0, 1]. You should preprocess your data before passing it to the released models by normalizing it:
mean_rgb = [0.485, 0.456, 0.406]
std_rgb = [0.229, 0.224, 0.225]
Note that in all our released models, Sobel filters are computed within the models as two convolutional layers (greyscale conversion + Sobel filters).
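For the PyTorch models, this normalization can be sketched as follows (plain NumPy here for illustration; with torchvision you would typically chain transforms.ToTensor() with transforms.Normalize(mean_rgb, std_rgb)):

```python
import numpy as np

# Channel statistics given above for the released PyTorch models
MEAN_RGB = np.array([0.485, 0.456, 0.406])
STD_RGB = np.array([0.229, 0.224, 0.225])

def normalize_rgb(img):
    """Normalize an (H, W, 3) RGB image with values already scaled to [0, 1]."""
    return (img - MEAN_RGB) / STD_RGB
```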
You can download all variants by running
$ ./download_model.sh
This will fetch the models into ${HOME}/deepcluster_models by default.
You can change that path in the environment variable.
Direct download links are provided here:
- AlexNet-PyTorch
- AlexNet-prototxt + AlexNet-caffemodel
- VGG16-PyTorch
- VGG16-prototxt + VGG16-caffemodel
We also provide the last epoch cluster assignments for these models. After downloading, open the file with Python 2:
import pickle
with open("./alexnet_cluster_assignment.pickle", "rb") as f:
b = pickle.load(f)
If you're a Python 3 user, specify encoding='latin1' in the load function.
Each file is a list of (image path, cluster_index) tuples.
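A small Python 3 helper for this; the 'latin1' encoding is needed because the files were pickled under Python 2 (the function name is ours, not part of the released code):

```python
import pickle

def load_assignments(path):
    """Load a Python 2 pickle of (image_path, cluster_index) tuples under Python 3."""
    with open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")
```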
Finally, we release the features extracted with the DeepCluster model for the ImageNet dataset. These features are 4096-dimensional and correspond to a forward pass through the model up to the penultimate convolutional layer (just before the last ReLU). If you plan to cluster the features, don't forget to normalize and reduce/whiten them.
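That preprocessing (PCA reduction with whitening, then L2 normalization, as in the paper's clustering step) can be sketched like this; the function name and the 256-dimensional default are illustrative, not part of the released code:

```python
import numpy as np

def preprocess_features(feats, pca_dim=256):
    """PCA-reduce and whiten features, then L2-normalize each vector."""
    # center the features
    feats = feats - feats.mean(axis=0, keepdims=True)
    # SVD gives the principal directions (rows of Vt) and singular values S
    U, S, Vt = np.linalg.svd(feats, full_matrices=False)
    # project onto the top pca_dim components and whiten (unit variance per component)
    reduced = feats @ Vt[:pca_dim].T / (S[:pca_dim] / np.sqrt(len(feats) - 1))
    # L2-normalize each feature vector
    norms = np.linalg.norm(reduced, axis=1, keepdims=True)
    return reduced / np.clip(norms, 1e-10, None)
```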
Running the unsupervised training
Unsupervised training can be launched by running:
$ ./main.sh
Please provide the path to the data folder:
DIR=/datasets01/imagenet_full_size/061417/train
To train an AlexNet network, specify ARCH=alexnet whereas to train a VGG-16 convnet use ARCH=vgg16.
You can also specify where you want to save the clustering logs and checkpoints using:
EXP=exp
During training, models are saved every n iterations (set using the --checkpoints flag), and can be found in, for instance, ${EXP}/checkpoints/checkpoint_0.pth.tar.
A log of the assignments in the clusters at each epoch can be found in the pickle file ${EXP}/clusters.
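Conceptually, each epoch alternates between clustering the current features and training the convnet on the resulting pseudo-labels. A toy NumPy stand-in for the clustering step (the released code uses Faiss k-means at scale; this small version only illustrates the idea):

```python
import numpy as np

def kmeans_pseudo_labels(features, k, n_iter=20, seed=0):
    """One clustering step of DeepCluster: run k-means on the current
    features and return cluster indices to use as pseudo-labels."""
    rng = np.random.default_rng(seed)
    # initialize centroids from k distinct data points
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        # assign each feature vector to its nearest centroid
        d = ((features[:, None, :] - centroids[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update each centroid to the mean of its assigned points
        for j in range(k):
            pts = features[labels == j]
            if len(pts):
                centroids[j] = pts.mean(0)
    return labels
```

The returned labels are then used as classification targets for the next training epoch, after which the features are re-clustered (every --reassign epochs).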
Full documentation of the unsupervised training code main.py:
usage: main.py [-h] [--arch ARCH] [--sobel] [--clustering {Kmeans,PIC}]
[--nmb_cluster NMB_CLUSTER] [--lr LR] [--wd WD]
[--reassign REASSIGN] [--workers WORKERS] [--epochs EPOCHS]
[--start_epoch START_EPOCH] [--batch BATCH]
[--momentum MOMENTUM] [--resume PATH]
[--checkpoints CHECKPOINTS] [--seed SEED] [--exp EXP]
[--verbose]
DIR
PyTorch Implementation of DeepCluster
positional arguments:
DIR path to dataset
optional arguments:
-h, --help show this help message and exit
--arch ARCH, -a ARCH CNN architecture (default: alexnet)
--sobel Sobel filtering
--clustering {Kmeans,PIC}
clustering algorithm (default: Kmeans)
--nmb_cluster NMB_CLUSTER, --k NMB_CLUSTER
number of cluster for k-means (default: 10000)
--lr LR learning rate (default: 0.05)
--wd WD weight decay pow (default: -5)
--reassign REASSIGN how many epochs of training between two consecutive
reassignments of clusters (default: 1)
--workers WORKERS number of data loading workers (default: 4)
--epochs EPOCHS number of total epochs to run (default: 200)
--start_epoch START_EPOCH
manual epoch number (useful on restarts) (default: 0)
--batch BATCH mini-batch size (default: 256)
--momentum MOMENTUM momentum (default: 0.9)
--resume PATH path to checkpoint (default: None)
--checkpoints CHECKPOINTS
how many iterations between two checkpoints (default:
25000)
--seed SEED random seed (default: 31)
--exp EXP path to exp folder
--verbose chatty
Evaluation protocols
Pascal VOC
To run the classification task with fine-tuning launch:
./eval_voc_classif_all.sh
and with no finetuning:
./eval_voc_classif_fc6_8.sh
Both these scripts download this code.
You need to download the VOC 2007 dataset. Then, specify in both ./eval_voc_classif_all.sh and ./eval_voc_classif_fc6_8.sh scripts the path CAFFE to point to the caffe branch, and VOC to point to the Pascal VOC directory.
Indicate in PROTO and MODEL respectively the path to the prototxt file of the model and the path to the model weights of the model to evaluate.
The flag --train-from indicates the separation between the frozen layers and the layers to train.
We also implemented VOC classification in PyTorch.
Erratum: when training the MLP only (fc6-8), the scaling parameters of the batch-norm layers in the whole network are trained. When freezing these parameters, we get 70.4 mAP.
Linear classification on activations
You can run these transfer tasks using:
$ ./eval_linear.sh
You need to specify the path to the supervised data (ImageNet or Places):
DATA=/datasets01/imagenet_full_size/061417/
the path of your model:
MODEL=/private/home/mathilde/deepcluster/checkpoint.pth.tar
and on top of which convolutional layer to train the classifier:
CONV=3
You can specify where you want to save the output of this experiment (checkpoints and best models) with
EXP=exp
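What eval_linear.py does amounts to fitting a softmax classifier on frozen activations. A minimal NumPy sketch of that idea (not the released implementation, which trains with SGD in PyTorch on the chosen conv layer's activations):

```python
import numpy as np

def train_linear_probe(feats, labels, n_classes, lr=0.5, epochs=500):
    """Fit a softmax (multinomial logistic) classifier on frozen features
    by full-batch gradient descent on the cross-entropy loss."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(1, keepdims=True)
        grad = (probs - onehot) / n              # cross-entropy gradient w.r.t. logits
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(0)
    return W, b
```

The quality of the frozen features is then measured by the classifier's validation accuracy, layer by layer.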
Full documentation for this task:
usage: eval_linear.py [-h] [--data DATA] [--model MODEL] [--conv {1,2,3,4,5}]
[--tencrops] [--exp EXP] [--workers WORKERS]
[--epochs EPOCHS] [--batch_size BATCH_SIZE] [--lr LR]
[--momentum MOMENTUM] [--weight_decay WEIGHT_DECAY]
[--seed SEED] [--verbose]
Train linear classifier on top of frozen convolutional layers of an AlexNet.
optional arguments:
-h, --help show this help message and exit
--data DATA path to dataset
--model MODEL path to model
--conv {1,2,3,4,5} on top of which convolutional layer train logistic
regression
--tencrops validation accuracy averaged over 10 crops
--exp EXP exp folder
--workers WORKERS number of data loading workers (default: 4)
--epochs EPOCHS number of total epochs to run (default: 90)
--batch_size BATCH_SIZE
mini-batch size (default: 256)
--lr LR learning rate
--momentum MOMENTUM momentum (default: 0.9)
--weight_decay WEIGHT_DECAY, --wd WEIGHT_DECAY
weight decay pow (default: -4)
--seed SEED random seed
--verbose chatty
Instance-level image retrieval
You can run the instance-level image retrieval transfer task using:
./eval_retrieval.sh
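Instance-level retrieval boils down to ranking database images by the similarity of their descriptors to the query's. A minimal cosine-similarity sketch (the function name is ours; the released script follows the full evaluation protocol from the paper):

```python
import numpy as np

def retrieve(query, database, topk=5):
    """Rank database feature vectors by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity to each database item
    order = np.argsort(-sims)[:topk]   # indices of the topk most similar items
    return order, sims[order]
```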
Visualisation
We provide two standard visualisation methods presented in our paper.