<p align="center"> <img src="assets/gfpgan_logo.png" height=130> </p>

<div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>

  1. :boom: Updated online demo: Replicate. Here is the backup.
  2. :boom: Updated online demo: Huggingface Gradio
  3. Colab Demo for GFPGAN <a href="https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>; (Another Colab Demo for the original paper model)
<!-- 3. Online demo: [Replicate.ai](https://replicate.com/xinntao/gfpgan) (may need to sign in, return the whole image) 4. Online demo: [Baseten.co](https://app.baseten.co/applications/Q04Lz0d/operator_views/8qZG6Bg) (backed by GPU, returns the whole image) 5. We provide a *clean* version of GFPGAN, which can run without CUDA extensions. So that it can run in **Windows** or on **CPU mode**. -->

:rocket: Thanks for your interest in our work. You may also want to check our new updates on the tiny models for anime images and videos in Real-ESRGAN :blush:

GFPGAN aims at developing a Practical Algorithm for Real-world Face Restoration.<br> It leverages rich and diverse priors encapsulated in a pretrained face GAN (e.g., StyleGAN2) for blind face restoration.

:question: Frequently Asked Questions can be found in FAQ.md.

:triangular_flag_on_post: Updates

  • :white_check_mark: Add RestoreFormer inference codes.
  • :white_check_mark: Add V1.4 model, which produces slightly more details and better identity than V1.3.
  • :white_check_mark: Add V1.3 model, which produces more natural restoration results, and better results on very low-quality / high-quality inputs. See more in Model zoo, Comparisons.md
  • :white_check_mark: Integrated to Huggingface Spaces with Gradio. See Gradio Web Demo.
  • :white_check_mark: Support enhancing non-face regions (background) with Real-ESRGAN.
  • :white_check_mark: We provide a clean version of GFPGAN, which does not require CUDA extensions.
  • :white_check_mark: We provide an updated model without colorizing faces.

If GFPGAN is helpful in your photos/projects, please help to :star: this repo or recommend it to your friends. Thanks:blush: <br> Other recommended projects:<br> :arrow_forward: Real-ESRGAN: A practical algorithm for general image restoration<br> :arrow_forward: BasicSR: An open-source image and video restoration toolbox<br> :arrow_forward: facexlib: A collection of useful face-related functions<br> :arrow_forward: HandyView: A PyQt5-based image viewer that is handy for viewing and comparison<br>


:book: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior

[Paper]   [Project Page]   [Demo] <br> Xintao Wang, Yu Li, Honglun Zhang, Ying Shan <br> Applied Research Center (ARC), Tencent PCG

<p align="center"> <img src="https://xinntao.github.io/projects/GFPGAN_src/gfpgan_teaser.jpg"> </p>

:wrench: Dependencies and Installation

Installation

We now provide a clean version of GFPGAN, which does not require customized CUDA extensions. <br> If you want to use the original model in our paper, please see PaperModel.md for installation.

  1. Clone repo

    git clone https://github.com/TencentARC/GFPGAN.git
    cd GFPGAN
    
  2. Install dependent packages

    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr
    
    # Install facexlib - https://github.com/xinntao/facexlib
    # We use face detection and face restoration helper in the facexlib package
    pip install facexlib
    
    pip install -r requirements.txt
    python setup.py develop
    
    # If you want to enhance the background (non-face) regions with Real-ESRGAN,
    # you also need to install the realesrgan package
    pip install realesrgan
    

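After installation, GFPGAN can also be used from Python instead of through the CLI. Below is a minimal sketch, assuming the V1.3 checkpoint has been downloaded as shown in the Quick Inference section. The `GFPGANer` arguments follow the `clean` (V1.2/V1.3) setup used by `inference_gfpgan.py`; treat them as a starting point and check that script for the exact values your model version needs.

```python
# Sketch of programmatic use; requires the packages installed above plus a
# downloaded checkpoint. Argument values follow the clean (v1.2/v1.3) setup
# in inference_gfpgan.py; treat them as a starting point, not a spec.

GFPGAN_KWARGS = dict(
    model_path="experiments/pretrained_models/GFPGANv1.3.pth",
    upscale=2,             # same meaning as the -s CLI option
    arch="clean",          # v1.2/v1.3 use the clean arch (no CUDA extensions)
    channel_multiplier=2,
    bg_upsampler=None,     # plug in a RealESRGANer here to enhance backgrounds
)

def restore_file(in_path, out_path):
    """Restore all faces in one image and save the pasted-back result."""
    import cv2                   # imported lazily so the sketch stays importable
    from gfpgan import GFPGANer  # provided by `python setup.py develop` above

    restorer = GFPGANer(**GFPGAN_KWARGS)
    img = cv2.imread(in_path, cv2.IMREAD_COLOR)
    # enhance() returns (cropped_faces, restored_faces, restored_img)
    _, _, restored = restorer.enhance(
        img, has_aligned=False, only_center_face=False, paste_back=True
    )
    cv2.imwrite(out_path, restored)
```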
:zap: Quick Inference

We take the v1.3 version as an example. More models can be found here.

Download pre-trained models: GFPGANv1.3.pth

wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models

Inference!

python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2
Usage: python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2 [options]...

  -h                   show this help
  -i input             Input image or folder. Default: inputs/whole_imgs
  -o output            Output folder. Default: results
  -v version           GFPGAN model version. Option: 1 | 1.2 | 1.3. Default: 1.3
  -s upscale           The final upsampling scale of the image. Default: 2
  -bg_upsampler        background upsampler. Default: realesrgan
  -bg_tile             Tile size for background sampler, 0 for no tile during testing. Default: 400
  -suffix              Suffix of the restored faces
  -only_center_face    Only restore the center face
  -aligned             Inputs are aligned faces
  -ext                 Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
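The `-bg_tile` option exists because upsampling a large background in one pass can exhaust GPU memory, so the image is processed in fixed-size patches. The idea can be sketched as follows; this is a simplified illustration only, not the actual Real-ESRGAN tiler, which additionally pads each tile with overlap to hide seams.

```python
def tile_grid(h, w, tile):
    """Split an h x w image into tile x tile patches (edge tiles may be smaller).

    Simplified sketch of what -bg_tile controls; each returned tuple is
    (top, bottom, left, right) in pixel coordinates.
    """
    if tile <= 0:  # 0 means: process the whole image in one pass
        return [(0, h, 0, w)]
    patches = []
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            patches.append((top, min(top + tile, h), left, min(left + tile, w)))
    return patches

# A 1080x1920 background with the default tile size of 400:
patches = tile_grid(1080, 1920, 400)
print(len(patches))  # 3 rows x 5 cols = 15 patches
```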

If you want to use the original model in our paper, please see PaperModel.md for installation and inference.

:european_castle: Model Zoo

| Version | Model Name | Description |
| :---: | :---: | :---: |
| V1.3 | GFPGANv1.3.pth | Based on V1.2; more natural restoration results; better results on very low-quality / high-quality inputs. |
| V1.2 | GFPGANCleanv1-NoCE-C2.pth | No colorization; no CUDA extensions are required. Trained with more data with pre-processing. |
| V1 | GFPGANv1.pth | The paper model, with colorization. |

The comparisons are in Comparisons.md.

Note that V1.3 is not always better than V1.2. You may need to select different models based on your purpose and inputs.

| Version | Strengths | Weaknesses |
| :---: | :---: | :---: |
| V1.3 | ✓ natural outputs<br>✓ better results on very low-quality inputs<br>✓ works on relatively high-quality inputs<br>✓ allows repeated (twice) restorations | ✗ not very sharp<br>✗ slight change in identity |
| V1.2 | ✓ sharper output<br>✓ with beauty makeup | ✗ some outputs are unnatural |
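When scripting around the model zoo, the version-to-filename mapping above can be captured in a small helper. This is an illustrative snippet, not part of the repo; the filenames are taken directly from the table.

```python
# Maps the CLI's -v option to the release filename listed in the model zoo.
# Illustrative helper (not part of the repo); filenames come from the table above.
MODEL_ZOO = {
    "1":   "GFPGANv1.pth",               # paper model, with colorization
    "1.2": "GFPGANCleanv1-NoCE-C2.pth",  # no colorization, no CUDA extensions
    "1.3": "GFPGANv1.3.pth",             # more natural restoration results
}

def model_file(version: str) -> str:
    try:
        return MODEL_ZOO[version]
    except KeyError:
        raise ValueError(
            f"unknown version {version!r}; choose from {sorted(MODEL_ZOO)}"
        )

print(model_file("1.3"))  # GFPGANv1.3.pth
```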

You can find more models (such as the discriminators) here: [Google Drive], OR [Tencent Cloud 腾讯微云]

:computer: Training

We provide the training codes for GFPGAN (used in our paper). <br> You could improve it according to your own needs.

Tips

  1. More high quality faces can improve the restoration quality.