SCOPS: Self-Supervised Co-Part Segmentation (CVPR 2019)
PyTorch implementation for self-supervised co-part segmentation.

License
Copyright (C) 2019 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
Paper
Installation
The code is developed with PyTorch v0.4 and uses TensorboardX for visualization. We recommend using virtualenv to run our code:
$ virtualenv -p python3 scops_env
$ source scops_env/bin/activate
(scops_env)$ pip install -r requirements.txt
To deactivate the virtual environment, run $ deactivate. To activate it again, run $ source scops_env/bin/activate.
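If you want to confirm the virtualenv is active before running pip install, a quick standard-library check works (this relies only on how virtualenv/venv rewrite sys.prefix, not on any SCOPS code):

```python
import sys

def in_virtualenv() -> bool:
    """Older virtualenvs set sys.real_prefix; venv-style environments
    point sys.prefix at the environment directory while keeping the
    original interpreter location in sys.base_prefix."""
    return (
        getattr(sys, "real_prefix", None) is not None
        or sys.base_prefix != sys.prefix
    )

print("virtualenv active:", in_virtualenv())
```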
SCOPS on Unaligned CelebA
Download data (Saliency, labels, pretrained model)
$ ./download_CelebA.sh
Download CelebA unaligned from here.
Test the pretrained model
Run $ ./evaluate_celebAWild.sh and accept all default options. The results are stored in a single webpage at results_CelebA/SCOPS_K8/ITER_100000/web_html/index.html.
Train the model
$ CUDA_VISIBLE_DEVICES={GPU} python train.py -f exps/SCOPS_K8_retrain.json, where {GPU} is the GPU device number.
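The {GPU} placeholder works by restricting which devices CUDA exposes to the training process: inside train.py the selected device then appears as device 0. A minimal sketch of the same mechanism from Python (the helper name is hypothetical; only the environment variable matters):

```python
import os

def launch_command(gpu_id, config="exps/SCOPS_K8_retrain.json"):
    """Build the training invocation with CUDA_VISIBLE_DEVICES limited
    to a single device, mirroring the shell one-liner above."""
    env = os.environ.copy()
    # Only this device is visible to CUDA inside train.py,
    # where it shows up as device 0.
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    cmd = ["python", "train.py", "-f", config]
    return cmd, env

cmd, env = launch_command(0)
```

Passing the command and env to subprocess.run would then start training pinned to that GPU.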
SCOPS on Caltech-UCSD Birds
Test the pretrained model
Note: the released model differs from the master branch in two ways: 1) it is trained with ground-truth silhouettes rather than saliency maps; 2) it crops birds with respect to their bounding boxes rather than using the original images.
First set the image and annotation paths on line 35 and line 37 of dataset/cub.py. Then run:
sh eval_cub.sh
Results as well as visualizations can be found in the results/cub/ITER_60000/train/ folder.
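Before running eval_cub.sh, it can help to verify that the two paths you edited into dataset/cub.py actually exist, so a typo is caught up front rather than partway through evaluation. A standard-library sketch (the path values below are hypothetical placeholders, not the repo's defaults):

```python
from pathlib import Path

# Hypothetical placeholder values; substitute whatever you set on
# lines 35 and 37 of dataset/cub.py.
IMAGE_ROOT = Path("/data/CUB_200_2011/images")
ANNOTATION_ROOT = Path("/data/CUB_200_2011/annotations")

def missing_paths(*paths):
    """Return the subset of paths that do not exist on disk."""
    return [p for p in paths if not p.exists()]

for p in missing_paths(IMAGE_ROOT, ANNOTATION_ROOT):
    print("Missing path:", p)
```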
Citation
Please consider citing our paper if you find this code useful for your research.
@inproceedings{hung:CVPR:2019,
title = {SCOPS: Self-Supervised Co-Part Segmentation},
author = {Hung, Wei-Chih and Jampani, Varun and Liu, Sifei and Molchanov, Pavlo and Yang, Ming-Hsuan and Kautz, Jan},
booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}
