
ProPainter

[ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting

<div align="center"> <div class="logo"> <a href="https://shangchenzhou.com/projects/ProPainter/"> <img src="assets/propainter_logo1_glow.png" style="width: 180px"> </a> </div> <h1>ProPainter: Improving Propagation and Transformer for Video Inpainting</h1> <div> <a href='https://shangchenzhou.com/' target='_blank'>Shangchen Zhou</a>&emsp; <a href='https://li-chongyi.github.io/' target='_blank'>Chongyi Li</a>&emsp; <a href='https://ckkelvinchan.github.io/' target='_blank'>Kelvin C.K. Chan</a>&emsp; <a href='https://www.mmlab-ntu.com/person/ccloy/' target='_blank'>Chen Change Loy</a> </div> <div> S-Lab, Nanyang Technological University&emsp; </div> <div> <strong>ICCV 2023</strong> </div> <div> <h4 align="center"> <a href="https://shangchenzhou.com/projects/ProPainter" target='_blank'> <img src="https://img.shields.io/badge/🐳-Project%20Page-blue"> </a> <a href="https://arxiv.org/abs/2309.03897" target='_blank'> <img src="https://img.shields.io/badge/arXiv-2309.03897-b31b1b.svg"> </a> <a href="https://youtu.be/92EHfgCO5-Q" target='_blank'> <img src="https://img.shields.io/badge/Demo%20Video-%23FF0000.svg?logo=YouTube&logoColor=white"> </a> <a href="https://huggingface.co/spaces/sczhou/ProPainter" target='_blank'> <img src="https://img.shields.io/badge/Demo-%F0%9F%A4%97%20Hugging%20Face-blue"> </a> <a href="https://openxlab.org.cn/apps/detail/ShangchenZhou/ProPainter" target='_blank'> <img src="https://img.shields.io/badge/Demo-%F0%9F%91%A8%E2%80%8D%F0%9F%8E%A8%20OpenXLab-blue"> </a> <img src="https://api.infinitescript.com/badgen/count?name=sczhou/ProPainter"> </h4> </div>

⭐ If ProPainter is helpful to your projects, please help star this repo. Thanks! 🤗

:open_book: For more visual results, go check out our <a href="https://shangchenzhou.com/projects/ProPainter/" target="_blank">project page</a>


</div>

Update

  • 2023.11.09: Integrated into :man_artist: OpenXLab. Try out the online demo! OpenXLab
  • 2023.11.09: Integrated into :hugs: Hugging Face. Try out the online demo! Hugging Face
  • 2023.09.24: We officially removed the watermark removal demos to prevent misuse of our work for unethical purposes.
  • 2023.09.21: Added features for memory-efficient inference. Check our GPU memory requirements. 🚀
  • 2023.09.07: Our code and model are publicly available. 🐳
  • 2023.09.01: This repo was created.

TODO

  • [ ] Make a Colab demo.
  • [x] ~~Make an interactive Gradio demo.~~
  • [x] ~~Update features for memory-efficient inference.~~

Results

👨🏻‍🎨 Object Removal

<table> <tr> <td> <img src="assets/object_removal1.gif"> </td> <td> <img src="assets/object_removal2.gif"> </td> </tr> </table>

🎨 Video Completion

<table> <tr> <td> <img src="assets/video_completion1.gif"> </td> <td> <img src="assets/video_completion2.gif"> </td> </tr> <tr> <td> <img src="assets/video_completion3.gif"> </td> <td> <img src="assets/video_completion4.gif"> </td> </tr> </table>

Overview

overall_structure

Dependencies and Installation

  1. Clone Repo

    git clone https://github.com/sczhou/ProPainter.git
    
  2. Create Conda Environment and Install Dependencies

    # create new anaconda env
    conda create -n propainter python=3.8 -y
    conda activate propainter
    
    # install python dependencies
    pip3 install -r requirements.txt
    
    • CUDA >= 9.2
    • PyTorch >= 1.7.1
    • Torchvision >= 0.8.2
    • Other required packages in requirements.txt

Get Started

Prepare pretrained models

Download our pretrained models from Releases V0.1.0 to the weights folder. (All pretrained models can also be automatically downloaded during the first inference.)

The directory structure will be arranged as:

weights
   |- ProPainter.pth
   |- recurrent_flow_completion.pth
   |- raft-things.pth
   |- i3d_rgb_imagenet.pt (for evaluating VFID metric)
   |- README.md
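Since the weights can also be fetched automatically on first inference, a helper along these lines can check for a local file before falling back to a download. `ensure_weight` and its `url` argument are illustrative sketches, not part of the repo's API:

```python
import os
import urllib.request

def ensure_weight(name, weights_dir="weights", url=None):
    """Return the local path of a pretrained weight file,
    downloading it first if it is missing and a URL is given."""
    path = os.path.join(weights_dir, name)
    if not os.path.exists(path):
        if url is None:
            raise FileNotFoundError(
                f"{path} is missing and no download URL was given")
        os.makedirs(weights_dir, exist_ok=True)
        # one-off fetch of the release asset (hypothetical URL)
        urllib.request.urlretrieve(url, path)
    return path
```

Pre-downloading the files from Releases V0.1.0 simply makes this check succeed without any network access.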

🏂 Quick test

We provide some examples in the inputs folder. Run the following commands to try it out:

# The first example (object removal)
python inference_propainter.py --video inputs/object_removal/bmx-trees --mask inputs/object_removal/bmx-trees_mask 
# The second example (video completion)
python inference_propainter.py --video inputs/video_completion/running_car.mp4 --mask inputs/video_completion/mask_square.png --height 240 --width 432

The results will be saved in the results folder. To test your own videos, please prepare the input mp4 video (or split frames) and frame-wise mask(s).
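A frame-wise mask is just a single-channel image whose white pixels mark the region to inpaint, as in the provided mask_square.png. A minimal numpy sketch (the function name and box layout are ours, not the repo's):

```python
import numpy as np

def make_square_mask(height, width, box):
    """Build a single-channel mask: 255 inside the hole, 0 elsewhere.
    `box` is (top, left, h, w) of the region to inpaint."""
    top, left, h, w = box
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[top:top + h, left:left + w] = 255
    return mask

# The same static mask can be reused for every frame, e.g. with Pillow:
# from PIL import Image
# for i in range(num_frames):
#     Image.fromarray(mask).save(f"my_mask/{i:05d}.png")
```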

If you want to specify the video resolution for processing or avoid running out of memory, you can set the video size via --width and --height:

# process a 576x320 video; set --fp16 to use fp16 (half precision) during inference.
python inference_propainter.py --video inputs/video_completion/running_car.mp4 --mask inputs/video_completion/mask_square.png --height 320 --width 576 --fp16

💃🏻 Interactive Demo

We also provide an interactive demo for object removal, allowing users to select any object they wish to remove from a video. You can try the demo on Hugging Face or run it locally.

<div align="center"> <img src="./web-demos/hugging_face/assets/demo.gif" alt="Demo GIF" style="max-width: 512px; height: auto;"> </div>

Please note that the demo's interface and usage may differ from the GIF animation above. For detailed instructions, refer to the user guide.

🚀 Memory-efficient inference

Video inpainting typically requires a significant amount of GPU memory. Here, we offer several features that facilitate memory-efficient inference and help avoid Out-Of-Memory (OOM) errors. You can use the following options to reduce memory usage further:

  • Reduce the number of local neighbors by decreasing --neighbor_length (default 10).
  • Reduce the number of global references by increasing --ref_stride (default 10).
  • Set --resize_ratio (default 1.0) below 1.0 to downscale the processed video.
  • Set a smaller video size via --width and --height.
  • Set --fp16 to use fp16 (half precision) during inference.
  • Shorten the sub-videos via --subvideo_length (default 80), which effectively decouples GPU memory cost from video length.
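To see why --subvideo_length and --ref_stride bound memory, here is a hypothetical sketch of the frame sampling they imply; the actual logic lives in inference_propainter.py and may differ (e.g. sub-videos there can overlap):

```python
def split_subvideos(num_frames, subvideo_length=80):
    """Process frames in fixed-size chunks so peak GPU memory depends
    on subvideo_length, not on the total video length."""
    return [(start, min(start + subvideo_length, num_frames))
            for start in range(0, num_frames, subvideo_length)]

def global_refs(num_frames, ref_stride=10):
    """Sample every ref_stride-th frame as a global reference;
    a larger stride means fewer references and less memory."""
    return list(range(0, num_frames, ref_stride))
```

For a 200-frame video with the defaults, this yields three sub-videos of at most 80 frames each, and doubling ref_stride halves the number of global reference frames held in memory.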

Below are the estimated GPU memory requirements for different sub-video lengths with fp32/fp16 precision:

| Resolution | 50 frames | 80 frames |
| :--- | :----: | :----: |
| 1280 x 720 | 28G / 19G | OOM / 25G |
| 720 x 480 | 11G / 7G | 13G / 8G |
| 640 x 480 | 10G / 6G | 12G / 7G |
| 320 x 240 | 3G / 2G | 4G / 3G |

Dataset preparation

<table> <thead> <tr> <th>Dataset</th> <th>YouTube-VOS</th> <th>DAVIS</th> </tr> </thead> <tbody> <tr> <td>Description</td> <td>For training (3,471) and evaluation (508)</td> <td>For evaluation (50 of 90)</td> </tr> <tr> <td>Images</td> <td> [<a href="https://competitions.codalab.org/competitions/19544#participate-get-data">Official Link</a>] (Download train and test all frames) </td> <td> [<a href="https://data.vision.ee.ethz.ch/csergi/share/davis/DAVIS-2017-trainval-480p.zip">Official Link</a>] (2017, 480p, TrainVal) </td> </tr> <tr> <td>Masks</td> <td colspan="2"> [<a href="https://drive.google.com/file/d/1dFTneS_zaJAHjglxU10gYzr1-xALgHa4/view?usp=sharing">Google Drive</a>] [<a href="https://pan.baidu.com/s/1JC-UKmlQfjhVtD81196cxA?pwd=87e3">Baidu Disk</a>] (For reproducing paper results; provided in <a href="https://arxiv.org/abs/2309.03897">ProPainter</a> paper) </td> </tr> </tbody> </table>

The training and test split files are provided in datasets/<dataset_name>. For each dataset, place its JPEGImages under datasets/<dataset_name> and resize all video frames to 432x240 for training. Unzip the downloaded mask files to datasets.

The datasets directory structure will be arranged as: (Note: please check it carefully)

datasets
   |- davis
      |- JPEGImages_432_240
         |- <video_name>
            |- 00000.jpg
            |- 00001.jpg
      |- test_masks
         |- <video_name>
            |- 00000.png
            |- 00001.png   
      |- train.json
      |- test.json
   |- youtube-vos
      |- JPEGImages_432_240
         |- <video_name>
            |- 00000.jpg
            |- 00001.jpg
      |- test_masks
         |- <video_name>
            |- 00000.png
            |- 00001.png
      |- train.json
      |- test.json   
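Assuming the layout above, a quick stdlib check can catch misplaced folders before training starts; `check_dataset_layout` is our illustrative helper, not part of the repo:

```python
import os

def check_dataset_layout(root="datasets"):
    """Verify the expected datasets layout before training; returns a
    list of missing paths (empty means the layout looks correct)."""
    missing = []
    for name in ("davis", "youtube-vos"):
        for sub in ("JPEGImages_432_240", "test_masks",
                    "train.json", "test.json"):
            path = os.path.join(root, name, sub)
            if not os.path.exists(path):
                missing.append(path)
    return missing
```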

Training

Our training configurations are provided in train_flowcomp.json (for the Recurrent Flow Completion Network) and train_propainter.json (for ProPainter).

Run one of the following commands for training:

 # For training Recurrent Flow Completion Network
 python train.py -c configs/train_flowcomp.json
 # For training ProPainter
 python train.py -c configs/train_propainter.json

You can run the same command to resume your training.

To speed up the training process, you ca
