MVIMP

Mixed Video and Image Manipulation Program


<p align="center"> <img alt="GitHub last commit" src="https://img.shields.io/github/last-commit/CyFeng16/MVIMP" /> <img alt="GitHub issues" src="https://img.shields.io/github/issues/CyFeng16/MVIMP" /> <img alt="GitHub License" src="https://img.shields.io/github/license/cyfeng16/MVIMP" /> <img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg" /> </p>

English | 简体中文 | Español

Welcome to MVIMP 👋

The name MVIMP (Mixed Video and Image Manipulation Program) was inspired by GIMP (GNU Image Manipulation Program), and we hope it can help more people in the same way.

I realize that training a well-performing AI model is only one side of the story; making it easy for others to use is the other. This repository was therefore built to provide out-of-the-box AI abilities for manipulating multimedia. Last but not least, have fun!

| Model | Input | Output | Parallel | Colab Link |
|:----------:|:------:|:------:|:----------------------:|:-------------:|
| AnimeGAN | Images | Images | True | Open In Colab |
| AnimeGANv2 | Images | Images | True | Open In Colab |
| DAIN | Video | Video | False | Open In Colab |
| DeOldify | Images | Images | True | Open In Colab |
| Photo3D | Images | Videos | True (not recommended) | Open In Colab |
| Waifu2x | Images | Images | True | Open In Colab |

You are welcome to discuss future features in this issue.

AnimeGANv2

Original repository: TachibanaYoshino/AnimeGANv2

The improved version of AnimeGAN, which converts landscape photos/videos (videos are still to-do) into anime. AnimeGANv2 improves on its predecessor in the following four ways:

  1. It solves the problem of high-frequency artifacts in the generated images.
  2. It is easy to train and directly achieves the effects reported in the paper.
  3. It further reduces the number of parameters in the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.
  4. It uses new, high-quality style data, drawn from BD movies as much as possible.

| Dependency | Version |
|:------------:|:----------------------------------:|
| TensorFlow | 1.15.2 |
| CUDA Toolkit | 10.0 (tested locally) / 10.1 (Colab) |
| Python | 3.6.8 (3.6+) |

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py
    # Step 2: Put your photos into ./Data/Input/
    # Step 3: Inference
    python3 inference_animeganv2.py -s {The_Style_You_Choose}
    
  3. Description of Parameters

    | params | abbr. | Default | Description |
    |:------:|:-----:|:-------:|:------------|
    | --style | -s | Hayao | The anime style you want to get. |

    | Style name | Anime style |
    |:----------:|:--------------:|
    | Hayao | Miyazaki Hayao |
    | Shinkai | Makoto Shinkai |
    | Paprika | Kon Satoshi |
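
For reference, the `--style`/`-s` option above can be mirrored with a minimal argparse sketch. The flag names, default, and choices come from the tables; the actual `inference_animeganv2.py` may differ in details, so treat this as an illustration rather than the repo's code:

```python
import argparse

# Style names from the table above, mapped to the anime style they imitate.
STYLES = {
    "Hayao": "Miyazaki Hayao",
    "Shinkai": "Makoto Shinkai",
    "Paprika": "Kon Satoshi",
}

def parse_style(argv=None):
    """Parse the --style/-s argument, defaulting to Hayao as documented."""
    parser = argparse.ArgumentParser(description="AnimeGANv2 inference (sketch)")
    parser.add_argument(
        "--style", "-s",
        default="Hayao",
        choices=sorted(STYLES),
        help="The anime style you want to get.",
    )
    return parser.parse_args(argv).style

print(parse_style([]))                 # Hayao (the default)
print(parse_style(["-s", "Shinkai"]))  # Shinkai
```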

AnimeGAN

Original repository: TachibanaYoshino/AnimeGAN

This is the open-source implementation of the paper <AnimeGAN: a novel lightweight GAN for photo animation>, which uses the GAN framework to transform real-world photos into anime images.

| Dependency | Version |
|:------------:|:----------------------------------:|
| TensorFlow | 1.15.2 |
| CUDA Toolkit | 10.0 (tested locally) / 10.1 (Colab) |
| Python | 3.6.8 (3.6+) |

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py -f animegan 
    # Step 2: Put your photos into ./Data/Input/
    # Step 3: Inference
    python3 inference_animegan.py
    
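Step 2 in both AnimeGAN recipes amounts to copying your images into the shared `./Data/Input/` directory. The sketch below stages that step; the directory layout comes from the instructions above, while the function name and the supported-extension list are assumptions you should adjust to whatever formats the model accepts:

```python
import shutil
import tempfile
from pathlib import Path

def stage_photos(src_dir, repo_root="MVIMP", exts=(".jpg", ".jpeg", ".png")):
    """Copy supported images from src_dir into the repo's ./Data/Input/ folder."""
    input_dir = Path(repo_root) / "Data" / "Input"
    input_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for path in sorted(Path(src_dir).iterdir()):
        if path.suffix.lower() in exts:
            shutil.copy(path, input_dir / path.name)
            copied.append(path.name)
    return copied

# Demo on a throwaway directory tree:
tmp = Path(tempfile.mkdtemp())
(tmp / "photos").mkdir()
(tmp / "photos" / "cat.jpg").write_bytes(b"\xff\xd8fake")
(tmp / "photos" / "notes.txt").write_text("not an image")
copied = stage_photos(tmp / "photos", repo_root=tmp / "MVIMP")
print(copied)  # ['cat.jpg'] -- non-image files are skipped
```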

DAIN

Original repository: baowenbo/DAIN

The Depth-Aware video frame INterpolation (DAIN) model explicitly detects occlusion by exploring the depth cue. We develop a depth-aware flow projection layer to synthesize intermediate flows that preferably sample closer objects than farther ones.

This method achieves SOTA performance on the Middlebury dataset. Demo videos are provided here.

The current version of DAIN (in this repo) can smoothly run 1080p video frame interpolation even on a GTX 1080 GPU, as long as you turn -hr on (see Description of Parameters below).
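
The frame-splitting idea behind `-hr` can be illustrated in isolation: process each high-resolution frame as two overlapping halves so that only about half a frame (plus a small margin) has to sit in GPU memory at once. This is an illustrative NumPy sketch of the general technique, not the repository's actual implementation:

```python
import numpy as np

def split_frame(frame, overlap=32):
    """Split an H x W (x C) frame into top/bottom halves sharing an overlap band."""
    mid = frame.shape[0] // 2
    top = frame[: mid + overlap]       # top half plus `overlap` extra rows
    bottom = frame[mid - overlap :]    # bottom half plus `overlap` extra rows
    return top, bottom

def merge_halves(top, bottom, full_height, overlap=32):
    """Reassemble the processed halves, dropping the duplicated overlap rows."""
    mid = full_height // 2
    return np.concatenate([top[:mid], bottom[overlap:]], axis=0)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # a Full HD frame
top, bottom = split_frame(frame)
restored = merge_halves(top, bottom, frame.shape[0])
print(top.shape[0], bottom.shape[0], restored.shape[0])  # 572 572 1080
```

Each half (540 + 32 rows here) can then be fed through the model separately, which is why `-hr` trades a little extra compute on the overlap band for a roughly halved peak GPU memory footprint.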

| Dependency | Version |
|:------------:|:-----------------------------------------------------:|
| PyTorch | 1.0.0 |
| CUDA Toolkit | 9.0 (Colab tested) |
| Python | 3.6.8 (3.6+) |
| GCC | 4.9 (compiling PyTorch 1.0.0 extension files (.c/.cu)) |

P.S. Make sure your virtual env has torch-1.0.0 and torchvision-0.2.1 with CUDA-9.0. ~~You can use the following command:~~ Known dependency issues are discussed in #5 and #16.

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py -f dain
    # Step 2: Put a single video file into ./Data/Input/
    # Step 3: Inference
    python3 inference_dain.py -input your_input.mp4 -ts 0.5 -hr
    
  3. Description of Parameters

    | params | abbr. | Default | Description |
    |:-----------------:|:------:|:----------:|:------------------------------------------------------------------------------------------------------------------|
    | --input_video | -input | / | The input video name. |
    | --time_step | -ts | 0.5 | Set the frame multiplier:<br>0.5 corresponds to 2X;<br>0.25 corresponds to 4X;<br>0.125 corresponds to 8X. |
    | --high_resolution | -hr | store_true | Defaults to False (action: store_true).<br>Turn it on when handling FHD videos;<br>a frame-splitting process will reduce GPU memory usage. |
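
The `--time_step` arithmetic from the table is simply `multiplier = 1 / time_step`. A small helper makes it explicit (the range check is an assumption based on the documented values, not something the repo enforces):

```python
def frame_multiplier(time_step):
    """Return the interpolation factor implied by DAIN's --time_step value."""
    if not 0 < time_step <= 0.5:
        raise ValueError("expected a time_step such as 0.5, 0.25, or 0.125")
    return round(1 / time_step)

def output_fps(input_fps, time_step):
    """Frame rate of the interpolated video for a given input frame rate."""
    return input_fps * frame_multiplier(time_step)

print(frame_multiplier(0.5))   # 2  (2X, the default)
print(frame_multiplier(0.125)) # 8  (8X)
print(output_fps(30, 0.25))    # 120 -- a 30 fps clip becomes 120 fps at 4X
```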

DeOldify

Original repository: jantic/DeOldify

DeOldify is a Deep Learning based project for colorizing and restoring old images and video!

We have integrated the inference capabilities of the DeOldify model (both Artistic and Stable, not Video) into the MVIMP repository, keeping the input and output interfaces consistent.

| Dependency | Version |
|:------------:|:--------------------------:|
| PyTorch | 1.5.0 |
| CUDA Toolkit | 10.1 (tested locally/Colab) |
| Python | 3.6.8 (3.6+) |

Other Python dependencies are listed in colab_requirements.txt and will be installed automatically when preparation.py runs.

Usage:

  1. Colab

    You can open our Jupyter notebook through the Colab link.

  2. Local

    # Step 1: Prepare
    git clone https://github.com/CyFeng16/MVIMP.git
    cd MVIMP
    python3 preparation.py -f deoldify
    # Step 2: Inference
    python3 -W ignore inference_deoldify.py -art
    
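The `-W ignore` flag above tells the interpreter to suppress Python warnings for the whole process. The in-script equivalent uses the standard warnings module, which is exactly what that flag configures:

```python
import warnings

def noisy_step():
    """A stand-in for a library call that emits warnings during inference."""
    warnings.warn("deprecated code path", DeprecationWarning)
    return "done"

# `python3 -W ignore ...` installs an "ignore" filter before your code runs;
# this block reproduces that effect in-process and records what gets through.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore")
    result = noisy_step()

print(result)        # done
print(len(caught))   # 0 -- the warning was filtered out
```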
