
MSGAN

MSGAN: Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis (CVPR2019)


<img src='imgs/teaser.jpg' width="900px">

Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis

PyTorch implementation of our MSGAN (Miss-GAN). We propose a simple yet effective mode-seeking regularization term that can be applied to arbitrary conditional generative adversarial networks across different tasks to alleviate mode collapse and improve sample diversity.
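
The mode-seeking term maximizes the ratio between the distance of two generated images and the distance of their latent codes, so distinct codes are pushed toward distinct outputs. Below is a minimal pure-Python sketch of that loss under the paper's formulation; the repository's actual implementation operates on PyTorch tensors, and the L1 distance and `eps` constant here are assumptions for illustration:

```python
def l1_distance(a, b):
    """Mean absolute (L1) distance between two flat vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def mode_seeking_loss(img1, img2, z1, z2, eps=1e-5):
    """Mode-seeking regularization, sketched in plain Python.

    The generator is trained to *maximize* the ratio
    d_I(G(c, z1), G(c, z2)) / d_z(z1, z2). Minimizing the
    reciprocal below achieves the same effect as a loss term.
    """
    ratio = l1_distance(img1, img2) / l1_distance(z1, z2)
    return 1.0 / (ratio + eps)  # small when outputs are diverse
```

When the generator collapses (two different latent codes map to nearly the same image), the ratio approaches zero and this loss blows up, penalizing the collapse; the term is added to the baseline cGAN objective with a weighting hyperparameter.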

Contact: Qi Mao (qimao@pku.edu.cn), Hsin-Ying Lee (hlee246@ucmerced.edu), and Hung-Yu Tseng (htseng6@ucmerced.edu)

Paper

Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis<br> Qi Mao*, Hsin-Ying Lee*, Hung-Yu Tseng*, Siwei Ma, and Ming-Hsuan Yang<br> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 (* equal contribution)<br> [arxiv]

Citing MSGAN

If you find MSGAN useful in your research, please consider citing:

@inproceedings{MSGAN,
  author = {Mao, Qi and Lee, Hsin-Ying and Tseng, Hung-Yu and Ma, Siwei and Yang, Ming-Hsuan},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  title = {Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis},
  year = {2019}
}

Example Results

<img src='imgs/DRIT.jpg' width="900px">

Usage

Prerequisites

  • Python 3.5 or Python 3.6
  • PyTorch 0.4.0 and torchvision (https://pytorch.org/)
  • tensorboardX
  • TensorFlow (for TensorBoard usage)

Install

  • Clone this repo:
git clone https://github.com/HelenMao/MSGAN.git

Training Examples

Download the datasets for each task into the datasets folder:

mkdir datasets

Conditioned on Label

cd MSGAN/DCGAN-Mode-Seeking
python train.py --dataroot ./datasets/Cifar10

Conditioned on Image

  • Paired Data: facades and maps
  • Baseline: Pix2Pix <br>

You can download the facades and maps datasets from the BicycleGAN [GitHub Project]. <br> We employ the network architecture of BicycleGAN and follow the training process of Pix2Pix.

cd MSGAN/Pix2Pix-Mode-Seeking
python train.py --dataroot ./datasets/facades

  • Unpaired Data: Yosemite (summer <-> winter) and Cat2Dog (cat <-> dog)
  • Baseline: DRIT <br>

You can download the datasets from the DRIT [GitHub Project]. <br> Specify --concat 0 for Cat2Dog to handle translation with large shape variation.

cd MSGAN/DRIT-Mode-Seeking
python train.py --dataroot ./datasets/cat2dog

Conditioned on Text

  • Dataset: CUB-200-2011
  • Baseline: StackGAN++ <br>

You can download the datasets from the StackGAN++ [GitHub Project].

cd MSGAN/StackGAN++-Mode-Seeking
python main.py --cfg cfg/birds_3stages.yml

Pre-trained Models

Download the pre-trained models and save them into

./models/

Evaluation

For Pix2Pix, DRIT, and StackGAN++, please follow the instructions in the corresponding GitHub projects of the baseline frameworks for evaluation details. <br>
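
The paper quantifies diversity with perceptual metrics such as LPIPS, which averages pairwise distances among samples generated from the same conditioning input. As a hedged illustration of that general recipe only, the sketch below averages pairwise L1 distances over a batch of flattened samples; the real evaluation uses a learned perceptual distance, not raw pixel differences:

```python
from itertools import combinations

def average_pairwise_distance(samples):
    """Average pairwise L1 distance over a list of flat sample vectors.

    A pixel-space stand-in for diversity scores like LPIPS:
    higher means the generator produced more varied outputs
    for the same conditioning input.
    """
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    pairs = list(combinations(samples, 2))
    return sum(l1(a, b) for a, b in pairs) / len(pairs)
```

A collapsed generator (identical samples) scores 0 under this measure, while diverse outputs score higher.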

Testing Examples

DCGAN-Mode-Seeking <br>

python test.py --dataroot ./datasets/Cifar10 --resume ./models/DCGAN-Mode-Seeking/00199.pth

Pix2Pix-Mode-Seeking <br>

python test.py --dataroot ./datasets/facades --checkpoints_dir ./models/Pix2Pix-Mode-Seeking/facades --epoch 400
python test.py --dataroot ./datasets/maps --checkpoints_dir ./models/Pix2Pix-Mode-Seeking/maps --epoch 400

DRIT-Mode-Seeking <br>

python test.py --dataroot ./datasets/yosemite --resume ./models/DRIT-Mode-Seeking/yosemite/01200.pth --concat 1
python test.py --dataroot ./datasets/cat2dog --resume ./models/DRIT-Mode-Seeking/cat2dog/01999.pth --concat 0

StackGAN++-Mode-Seeking <br>

python main.py --cfg cfg/eval_birds.yml 
