# vid2vid
Project | YouTube(short) | YouTube(full) | arXiv | Paper(full)
PyTorch implementation of our method for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation. It can be used for turning semantic label maps into photo-realistic videos, synthesizing people talking from edge maps, or generating human motions from poses. The core of video-to-video translation is image-to-image translation. Some of our work in that space can be found in pix2pixHD and SPADE. <br><br>
## Video-to-Video Synthesis
Ting-Chun Wang<sup>1</sup>, Ming-Yu Liu<sup>1</sup>, Jun-Yan Zhu<sup>2</sup>, Guilin Liu<sup>1</sup>, Andrew Tao<sup>1</sup>, Jan Kautz<sup>1</sup>, Bryan Catanzaro<sup>1</sup>
<sup>1</sup>NVIDIA Corporation, <sup>2</sup>MIT CSAIL
In Neural Information Processing Systems (NeurIPS) 2018
## Video-to-Video Translation
- Label-to-Streetview Results
- Edge-to-Face Results
- Pose-to-Body Results
- Frame Prediction Results
## Prerequisites
- Linux or macOS
- Python 3
- NVIDIA GPU + CUDA cuDNN
- PyTorch 0.4
## Getting Started
### Installation
- Install python libraries dominate and requests.
```bash
pip install dominate requests
```
- If you plan to train with face datasets, please install dlib.
```bash
pip install dlib
```
- Clone this repo:
```bash
git clone https://github.com/NVIDIA/vid2vid
cd vid2vid
```
- Docker Image
  - If you have difficulty building the repo, a docker image can be found in the `docker` folder.
### Testing
- Please first download the example datasets by running `python scripts/download_datasets.py`.
- Next, compile a snapshot of FlowNet2 by running `python scripts/download_flownet2.py`.
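FlowNet2 supplies the optical flow used to warp the previously generated frame toward the current one. As a rough illustration only (the repo's actual implementation samples bilinearly on the GPU; this nearest-neighbour version is a simplified sketch), warping a frame with a per-pixel flow field looks like:

```python
import numpy as np

# Simplified sketch: warp the previous frame with a per-pixel flow field.
# Nearest-neighbour sampling is used for brevity; the real model uses
# differentiable bilinear sampling.
def warp_with_flow(prev_frame, flow):
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # For each target pixel, look up the source pixel the flow points to.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

frame = np.arange(16.0).reshape(4, 4)
flow = np.ones((4, 4, 2))  # every pixel pulls from its (+1, +1) neighbour
warped = warp_with_flow(frame, flow)
```

The generator blends such a warped frame with newly hallucinated content, which is why temporally consistent flow estimates matter.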
#### Cityscapes
- Please download the pre-trained Cityscapes model by:
```bash
python scripts/street/download_models.py
```
- To test the model (`bash ./scripts/street/test_2048.sh`):
```bash
#!./scripts/street/test_2048.sh
python test.py --name label2city_2048 --label_nc 35 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G
```
  The test results will be saved in: `./results/label2city_2048/test_latest/`.
- We also provide a smaller model trained with a single GPU, which produces slightly worse performance at 1024 x 512 resolution.
  - Please download the model by `python scripts/street/download_models_g1.py`.
  - To test the model (`bash ./scripts/street/test_g1_1024.sh`):
```bash
#!./scripts/street/test_g1_1024.sh
python test.py --name label2city_1024_g1 --label_nc 35 --loadSize 1024 --n_scales_spatial 3 --use_instance --fg --n_downsample_G 2 --use_single_G
```
- You can find more example scripts in the `scripts/street/` directory.
#### Faces
- Please download the pre-trained model by:
```bash
python scripts/face/download_models.py
```
- To test the model (`bash ./scripts/face/test_512.sh`):
```bash
#!./scripts/face/test_512.sh
python test.py --name edge2face_512 --dataroot datasets/face/ --dataset_mode face --input_nc 15 --loadSize 512 --use_single_G
```
  The test results will be saved in: `./results/edge2face_512/test_latest/`.
## Dataset
- Cityscapes
  - We use the Cityscapes dataset as an example. To train a model on the full dataset, please download it from the official website (registration required).
  - We apply a pre-trained segmentation algorithm to get the corresponding semantic maps (train_A) and instance maps (train_inst).
  - Please add the obtained images to the `datasets` folder in the same way the example images are provided.
- Face
  - We use the FaceForensics dataset. We then use landmark detection to estimate the face keypoints, and interpolate them to get face edges.
- Pose
  - We use random dancing videos found on YouTube. We then apply DensePose / OpenPose to estimate the poses for each frame.
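The landmark-to-edge step for faces can be sketched as follows. This is a simplified illustration, not the repo's code (which interpolates smooth curves through the detected landmarks); `keypoints_to_edge_map` and its straight-line sampling are hypothetical:

```python
import numpy as np

# Hypothetical sketch: densely sample points between consecutive landmarks
# along one facial contour and rasterize them into a binary edge map.
def keypoints_to_edge_map(points, size, samples_per_segment=10):
    edge = np.zeros(size, dtype=np.uint8)
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= y < size[0] and 0 <= x < size[1]:
                edge[y, x] = 1
    return edge

contour = [(2, 2), (10, 4), (18, 2)]  # (x, y) landmarks along one contour
edge_map = keypoints_to_edge_map(contour, (24, 24))
```

Stacking the edge maps of all facial contours produces the multi-channel edge input the face model consumes.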
## Training with Cityscapes dataset
- First, download the FlowNet2 checkpoint file by running `python scripts/download_models_flownet2.py`.
- Training with 8 GPUs:
  - We adopt a coarse-to-fine approach, sequentially increasing the resolution from 512 x 256, 1024 x 512, to 2048 x 1024.
  - Train a model at 512 x 256 resolution (`bash ./scripts/street/train_512.sh`):
```bash
#!./scripts/street/train_512.sh
python train.py --name label2city_512 --label_nc 35 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 6 --n_frames_total 6 --use_instance --fg
```
  - Train a model at 1024 x 512 resolution (must train 512 x 256 first) (`bash ./scripts/street/train_1024.sh`):
```bash
#!./scripts/street/train_1024.sh
python train.py --name label2city_1024 --label_nc 35 --loadSize 1024 --n_scales_spatial 2 --num_D 3 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 4 --use_instance --fg --niter_step 2 --niter_fix_global 10 --load_pretrain checkpoints/label2city_512
```
- If you have TensorFlow installed, you can see TensorBoard logs in `./checkpoints/label2city_1024/logs` by adding `--tf_log` to the training scripts.
- Training with a single GPU:
  - We trained our models using multiple GPUs. For convenience, we provide some sample training scripts (train_g1_XXX.sh) for single-GPU users, up to 1024 x 512 resolution. Again, a coarse-to-fine approach is adopted (256 x 128, 512 x 256, 1024 x 512). Performance is not guaranteed with these scripts.
  - For example, to train a 256 x 128 video with a single GPU (`bash ./scripts/street/train_g1_256.sh`):
```bash
#!./scripts/street/train_g1_256.sh
python train.py --name label2city_256_g1 --label_nc 35 --loadSize 256 --use_instance --fg --n_downsample_G 2 --num_D 1 --max_frames_per_gpu 6 --n_frames_total 6
```
- Training at full (2k x 1k) resolution:
  - Training at full resolution (2048 x 1024) requires 8 GPUs with at least 24G memory each (`bash ./scripts/street/train_2048.sh`). If only GPUs with 12G/16G memory are available, please use the script `./scripts/street/train_2048_crop.sh`, which crops the images during training. Performance is not guaranteed with this script.
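The memory-saving idea behind the cropping script can be thought of as follows. This is a hypothetical sketch; the window size and sampling shown here are illustrative, not the actual parameters of `train_2048_crop.sh`:

```python
import random

# Illustrative sketch: instead of feeding the full 2048x1024 frame, sample a
# random fixed-size window so the batch fits in a 12G/16G GPU.
def sample_crop(full_w=2048, full_h=1024, crop_w=1024, crop_h=512):
    x = random.randint(0, full_w - crop_w)
    y = random.randint(0, full_h - crop_h)
    return x, y, x + crop_w, y + crop_h  # left, top, right, bottom

box = sample_crop()
```

The trade-off is that the model never sees full-frame context at full resolution in one step, which is why performance is not guaranteed with the cropped variant.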
## Training with face datasets
- If you haven't already, please download the example dataset by running `python scripts/download_datasets.py`.
- Run the following command to compute face landmarks for the training dataset:
```bash
python data/face_landmark_detection.py train
```
- Run the example script (`bash ./scripts/face/train_512.sh`):
```bash
python train.py --name edge2face_512 --dataroot datasets/face/ --dataset_mode face --input_nc 15 --loadSize 512 --num_D 3 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 6 --n_frames_total 12
```
- For single-GPU users, example scripts are in train_g1_XXX.sh. These scripts are not fully tested, so please use them at your own discretion. If you still hit out-of-memory errors, try reducing `max_frames_per_gpu`.
- More example scripts can be found in `scripts/face/`.
- Please refer to More Training/Test Details for more explanations about training flags.
## Training with pose datasets
- If you haven't already, please download the example dataset by running `python scripts/download_datasets.py`.
- Example DensePose and OpenPose results are included. If you plan to use your own dataset, please generate these results and put them in the same way the example dataset is provided.
- Run the example script (`bash ./scripts/pose/train_256p.sh`):
```bash
python train.py --name pose2body_256p --dataroot datasets/pose --dataset_mode pose --input_nc 6 --num_D 2 --resize_or_crop ScaleHeight_and_scaledCrop --loadSize 384 --fineSize 256 --gpu_ids 0,1,2,3,4,5,6,7 --batchSize 8 --max_frames_per_gpu 3 --no_first_img --n_frames_total 12 --max_t_step 4
```
- Again, for single-GPU users, example scripts are in train_g1_XXX.sh. These scripts are not fully tested, so please use them at your own discretion. If you still hit out-of-memory errors, try reducing `max_frames_per_gpu`.
- More example scripts can be found in `scripts/pose/`.
- Please refer to More Training/Test Details for more explanations about training flags.
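A note on the `--input_nc 6` flag used above: a plausible reading (an assumption for illustration, not confirmed by this README) is that the 3-channel DensePose rendering and the 3-channel OpenPose rendering of each frame are concatenated channel-wise into one network input:

```python
import numpy as np

# Assumed layout for illustration: 3 DensePose channels + 3 OpenPose channels
# stacked along the channel axis give the 6-channel pose input.
densepose = np.zeros((3, 256, 256), dtype=np.float32)  # placeholder rendering
openpose = np.zeros((3, 256, 256), dtype=np.float32)   # placeholder rendering
net_input = np.concatenate([densepose, openpose], axis=0)
```

If you prepare your own pose dataset, the two renderings must be aligned per frame before being fed to the model.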
## Training with your own dataset
- If your input is a label map, please generate one-channel label maps whose pixel values correspond to the object labels (i.e., 0, 1, ..., N-1, where N is the number of labels). This is because we need to generate one-hot vectors from the label maps. Please use `--label_nc N` during both training and testing.
- If your input is not a label map, please specify `--input_nc
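The one-channel requirement exists because of the one-hot conversion mentioned above. A minimal sketch of that conversion (illustrative only, written with numpy rather than the repo's torch code; `label_map_to_one_hot` is a hypothetical helper):

```python
import numpy as np

# Hypothetical sketch: turn a one-channel label map with values 0..N-1 into
# an N-channel one-hot array, one channel per object label.
def label_map_to_one_hot(label_map, num_labels):
    h, w = label_map.shape
    one_hot = np.zeros((num_labels, h, w), dtype=np.float32)
    for c in range(num_labels):
        one_hot[c][label_map == c] = 1.0
    return one_hot

labels = np.array([[0, 1],
                   [2, 1]])          # a tiny 2x2 label map with N = 3 labels
one_hot = label_map_to_one_hot(labels, 3)
```

A label map saved as a color image cannot be converted this way, which is why the pixel values themselves must be the label indices.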
