[AAAI 2024] DDRNet

This is the official PyTorch implementation of the paper Decoupling Degradations with Recurrent Network for Video Restoration in Under-Display Camera.

Introduction

<img src="./fig/teaser.png" width=100%>

Contribution

  • We propose a novel network with long- and short-term video representation learning that decouples video degradations for the UDC video restoration task (D$^2$RNet), the first work to address UDC video degradation. Its core decoupling attention module (DAM) provides a tailored solution to the degradations caused by different incident-light intensities in the video.
  • We propose a large-scale UDC video restoration dataset (VidUDC33K) that includes numerous challenging scenarios. To the best of our knowledge, this is the first dataset for UDC video restoration.
  • Extensive quantitative and qualitative evaluations demonstrate the superiority of D$^2$RNet. On the proposed VidUDC33K dataset, D$^2$RNet gains 1.02 dB PSNR over other restoration methods.
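For reference, the PSNR metric behind the reported 1.02 dB gain can be computed as follows. This is a minimal NumPy sketch, not the repository's evaluation code; it assumes frames normalized to a peak value of 1.0:

```python
import numpy as np

def psnr(restored: np.ndarray, gt: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((restored.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Since PSNR is logarithmic in MSE, a 1.02 dB gain corresponds to
# roughly 21% lower MSE at the same peak value.
```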

Overview

<img src="./fig/overview.jpg" width=100%>

Visual

<img src="./fig/result.jpg" width=90%>

Dataset

  1. Download the original HDR videos and real videos from google drive or baidu drive (code: 4k84) into ./dataset.
  2. Unzip the original HDR videos and real videos:
cd ./dataset
unzip Video.zip
unzip Real_Video.zip
  3. Generate the sequences for training and testing based on synthvideo_meta.txt and ZTE_new_psf_5.npy, run
python generate_synthvideo.py

The principle of obtaining the synthetic dataset is as follows:

<img src="./fig/build_dataset.jpg" width=100%>
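The diffraction-blur step of this synthesis principle can be sketched as below. This is only an illustration under simplifying assumptions (circular convolution, PSF normalized to unit sum); the actual pipeline, including the real PSF stored in ZTE_new_psf_5.npy and any noise or tone-mapping steps, lives in generate_synthvideo.py:

```python
import numpy as np

def udc_degrade(frame: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Simulate UDC blur by circularly convolving each channel with a PSF.

    frame: (H, W, C) float array; psf: (h, w) kernel.
    """
    h, w = frame.shape[:2]
    # Zero-pad the PSF to the frame size and shift its center to the
    # origin so the convolution does not translate the image.
    kernel = np.zeros((h, w), dtype=np.float64)
    ph, pw = psf.shape
    kernel[:ph, :pw] = psf / psf.sum()
    kernel = np.roll(kernel, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    K = np.fft.rfft2(kernel)
    out = np.empty_like(frame, dtype=np.float64)
    for c in range(frame.shape[2]):
        # Pointwise product in the frequency domain = circular convolution.
        out[..., c] = np.fft.irfft2(np.fft.rfft2(frame[..., c]) * K, s=(h, w))
    return out
```

With a delta-function PSF the output equals the input, which is a quick sanity check that the padding and shift are correct.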
  4. Generate the sequences for real-scenario validation based on realvideo_meta.txt, run
python generate_realdata.py
  5. Arrange the VidUDC33K directory structure as follows:
        ├────dataset
                ├────VidUDC33K
                        ├────Input
                                ├────000
                                        ├────000.npy
                                        ├────...
                                        ├────049.npy
                                ├────001
                                ├────...
                                ├────676
                        ├────GT
                                ├────000
                                        ├────000.npy
                                        ├────...
                                        ├────049.npy
                                ├────001
                                ├────...
                                ├────676
                ├────VidUDC33K_real
                        ├────Input
                                ├────000
                                        ├────000.npy
                                        ├────...
                                        ├────049.npy
                                ├────001
                                ├────...
                                ├────009
                        ├────GT
                                ├────000
                                        ├────000.npy
                                        ├────...
                                        ├────049.npy
                                ├────001
                                ├────...
                                ├────009
                ├────synthvideo_meta.txt
                ├────realvideo_meta.txt
                ├────ZTE_new_psf_5.npy

The distribution of the dataset is as follows:

<img src="./fig/dataset.jpg" width=70%>
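Before training, the directory layout above can be sanity-checked with a short script. This is a hypothetical helper, not part of the repository; the sequence and frame counts follow the tree shown above (677 synthetic sequences, 10 real sequences, 50 frames each):

```python
from pathlib import Path

def check_split(root: str, n_seqs: int, n_frames: int = 50) -> list:
    """Return the paths of any missing frames under a VidUDC33K-style root.

    Assumes the layout shown above: root/{Input,GT}/SSS/FFF.npy.
    """
    missing = []
    for split in ("Input", "GT"):
        for seq in range(n_seqs):
            for frame in range(n_frames):
                p = Path(root) / split / f"{seq:03d}" / f"{frame:03d}.npy"
                if not p.exists():
                    missing.append(str(p))
    return missing

# e.g. check_split("./dataset/VidUDC33K", n_seqs=677)
#      check_split("./dataset/VidUDC33K_real", n_seqs=10)
```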

Test

  1. Clone this GitHub repo:
git clone https://github.com/ChengxuLiu/DDRNet.git
cd DDRNet
  2. Prepare the testing dataset and modify "folder_lq" and "folder_gt" in ./test.py
  3. Run the test:
python test.py --save_result
  4. The results are saved in ./results

Train

  1. Clone this GitHub repo:
git clone https://github.com/ChengxuLiu/DDRNet.git
cd DDRNet
  2. Prepare the training dataset and modify "dataroot_gt" and "dataroot_lq" in ./options/DDRNet/train_DDRNet.json
  3. Run the training:
python train.py --opt ./options/DDRNet/train_DDRNet.json
or
python -m torch.distributed.launch --nproc_per_node=4 --master_port=23333 train.py --opt ./options/DDRNet/train_DDRNet.json --dist True
  4. The models are saved in ./experiments

Results

The output results on the VidUDC33K testing set can be downloaded from google drive or baidu drive (code: 4k84).

Citation

If you find the code and pre-trained models useful for your research, please consider citing our paper. :blush:

@inproceedings{liu2024decoupling, 
    title = {Decoupling Degradations with Recurrent Network for Video Restoration in Under-Display Camera},
    author = {Liu, Chengxu and Wang, Xuan and Fan, Yuanting and Li, Shuai and Qian, Xueming}, 
    booktitle = {Proceedings of the 38th AAAI Conference on Artificial Intelligence}, 
    year = {2024}
    }

Contact

If you encounter any problems, please describe them in the issues or contact:

Acknowledgement

The code of DDRNet is built upon RVRT, DISCNet, and MMagic, and we express our gratitude to these awesome projects.
