AVDNet
[IEEE Signal Processing Letters, 2025] Adaptive Video Demoiréing Network with Subtraction-Guided Alignment
Seung-Hun Ok, Young-Min Choi, Seung-Wook Kim, Se-Ho Lee
Paper | Supplementary Materials
Introduction
<p align="center"> <img src="network.png" alt="AVDNet Architecture"/> </p>

We propose an adaptive video demoiréing network (AVDNet) that effectively suppresses moiré artifacts while preserving temporal consistency. AVDNet transforms moiré-contaminated frames into temporally consistent clean frames by employing two key components: the adaptive bandpass block (ABB) and the subtraction-guided alignment block (SGAB). First, ABB applies an adaptive bandpass filter (ABF) to each frame, modulated by input-specific coefficients to selectively attenuate moiré frequencies based on the spectral distribution of the input. Then, SGAB aligns consecutive frames by exploiting subtraction maps, which effectively suppresses the propagation of moiré artifacts across time. Experimental results show that AVDNet outperforms existing video demoiréing methods while maintaining a compact and efficient network architecture.
Environment
The experiments were conducted using the following software environment and libraries:
- Python: 3.11.4
- CUDA: 12.1
- PyTorch: 2.1.0
- Torchvision: 0.16.0
- numpy: 1.26.4
- scikit-image
- opencv-python
- deepspeed
- lpips
- tensorboard
- wandb
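For convenience, the pinned versions above can be collected into a `requirements.txt` (a sketch: packages listed without versions are left unpinned, and the PyTorch/Torchvision wheels should match the CUDA 12.1 builds):

```text
torch==2.1.0
torchvision==0.16.0
numpy==1.26.4
scikit-image
opencv-python
deepspeed
lpips
tensorboard
wandb
```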
Dataset
Our project is based on the VDmoire dataset, which can be downloaded from here.
After downloading the dataset, place the folders as follows:
```
project_root/
├── AVDNet/
│   ├── experiments/...
│   │   ...
│   └── train.py
└── datasets/
    ├── homo/...
    └── optical/
        ├── iphone/...
        └── tcl/...
```
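Assuming the layout above, a small sanity check can confirm the dataset folders are where the code expects them (a hypothetical helper, not part of the released code):

```python
from pathlib import Path

# Dataset subdirectories expected under project_root/datasets/,
# following the tree shown above.
EXPECTED = ["homo", "optical/iphone", "optical/tcl"]

def check_dataset_layout(project_root: str) -> dict:
    """Map each expected dataset subdirectory to whether it exists on disk."""
    data = Path(project_root) / "datasets"
    return {sub: (data / sub).is_dir() for sub in EXPECTED}

print(check_dataset_layout("project_root"))
```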
Pretrained Models
You can download the pretrained model for testing from here.
After downloading the model, place it in the AVDNet/experiments/ directory before running the test.
Note: If you use the provided pretrained model, be sure to set `strict_load: false` in the test option file, as some class names differ slightly.
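As a sketch of where that flag goes (the exact key layout is an assumption based on common BasicSR-style option files, and the checkpoint filename is hypothetical), the relevant excerpt of a test option file might look like:

```yaml
# Hypothetical excerpt from a test option file, e.g. options/test/Test_tclv2.yml
path:
  pretrain_network_g: experiments/avdnet_pretrained.pth  # hypothetical filename
  strict_load: false  # tolerate slightly different class/parameter names
```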
Train/Test
The following is an example command for training on the iPhone-V1 subset using GPU 0:
```
CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/Train_ipv1.yml
```
The following is an example command for testing on the TCL-V2 subset using GPU 3:
```
CUDA_VISIBLE_DEVICES=3 python test.py -opt options/test/Test_tclv2.yml
```
You can run training or testing by selecting the appropriate .yml configuration file and specifying the GPU to use.
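For background on what `strict_load: false` ultimately does, here is a minimal, self-contained sketch of PyTorch's non-strict state-dict loading (the module and layer names are hypothetical and unrelated to AVDNet's actual classes):

```python
import torch.nn as nn

# Two tiny models whose layer names differ, mimicking the
# "class names differ slightly" situation described above.
class OldNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_a = nn.Conv2d(3, 8, 3, padding=1)

class NewNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_b = nn.Conv2d(3, 8, 3, padding=1)  # renamed layer

state = OldNet().state_dict()
model = NewNet()
# strict=True (the default) would raise a RuntimeError here;
# strict=False skips mismatched keys and reports them instead.
result = model.load_state_dict(state, strict=False)
print(sorted(result.missing_keys))     # keys the model expects but the checkpoint lacks
print(sorted(result.unexpected_keys))  # checkpoint keys with no matching parameter
```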
Results
<p align="center"> <img src="results.png" alt="Results"/> </p>

Citation
Please cite the following paper if you use this code in your research:
```
@article{ok2025adaptive,
  title   = {Adaptive Video Demoiréing Network With Subtraction-Guided Alignment},
  author  = {Ok, Seung-Hun and Choi, Young-Min and Kim, Seung-Wook and Lee, Se-Ho},
  journal = {IEEE Signal Processing Letters},
  volume  = {32},
  pages   = {2733--2737},
  year    = {2025}
}
```
Acknowledgement
Our work and implementation were inspired by MBCNN and DTNet.
We sincerely thank the authors for making their code publicly available.
Contact
For any questions, please contact: cornking123@jbnu.ac.kr
