Flickerformer
The official code for the paper "It Takes Two: A Duet of Periodicity and Directionality for Burst Flicker Removal".
🧭 Overview
Quick links: Motivation | Architecture | Results | Training | Citation
📖 Introduction
Flicker artifacts are caused by unstable illumination and row-wise exposure under the rolling-shutter mechanism, leading to structured spatial-temporal degradation.
Unlike common degradations such as noise or low-light, flicker exhibits two intrinsic properties: periodicity and directionality.
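These two priors can be made concrete with a toy simulation. The sinusoidal row-wise gain below is an illustrative assumption (not the paper's formulation): a light source oscillating at a fixed frequency, sampled row by row under a rolling shutter, produces stripes with a fixed period (periodicity) that are constant along each row (directionality).

```python
import numpy as np

def add_flicker(img, period_rows=64, depth=0.4, phase=0.0):
    """Apply a row-wise sinusoidal gain to an (H, W) image.

    period_rows : flicker period in rows (the periodicity prior)
    depth       : modulation depth of the illumination
    phase       : temporal phase offset of this burst frame
    """
    h = img.shape[0]
    rows = np.arange(h)
    gain = 1.0 + depth * np.sin(2 * np.pi * rows / period_rows + phase)
    # The gain varies only with the row index, so the resulting
    # stripes are horizontal: the directionality prior.
    return img * gain[:, None]

clean = np.full((256, 256), 0.5)
flickered = add_flicker(clean, period_rows=64, depth=0.4)
# The degradation repeats every `period_rows` rows and is constant along each row.
```

Different frames in a burst would carry different `phase` values, which is exactly the inter-frame cue that a fusion module can exploit.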
Flickerformer is a transformer-based framework for burst flicker removal, built on three key components:
- PFM (Phase-based Fusion Module): adaptively fuses burst features via inter-frame phase correlation;
- AFFN (Autocorrelation Feed-Forward Network): captures intra-frame periodic structures through autocorrelation;
- WDAM (Wavelet-based Directional Attention Module): uses directional high-frequency wavelet cues to guide low-frequency dark-region restoration.
The model suppresses flicker effectively while reducing ghosting artifacts, and achieves superior quantitative and visual performance compared with prior methods.
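The autocorrelation idea behind AFFN can be sketched on a 1-D signal (the real module operates on learned feature maps; this is an illustration, not the paper's implementation). By the Wiener-Khinchin theorem, the autocorrelation is the inverse FFT of the power spectrum, so a periodic structure shows up as a peak at its period:

```python
import numpy as np

def autocorrelation(x):
    """Circular autocorrelation via the Wiener-Khinchin theorem."""
    spec = np.fft.rfft(x)
    acf = np.fft.irfft(spec * np.conj(spec), n=len(x))
    return acf / acf[0]  # normalize so lag 0 has correlation 1

t = np.arange(512)
signal = np.sin(2 * np.pi * t / 64)   # flicker-like signal with period 64
acf = autocorrelation(signal)
peak = np.argmax(acf[32:96]) + 32     # search away from the trivial lag-0 peak
# `peak` lands at lag 64, recovering the period.
```

A feed-forward network built on such correlations sees the repetition structure directly instead of having to learn it from raw intensities.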
💡 Motivation
Flicker is not random noise. It is a structured degradation with explicit physical priors. As shown below, phase information is strongly related to flicker spatial distribution, and the rolling-shutter mechanism introduces directional stripe patterns.

🧠 Flickerformer Architecture
Flickerformer adopts a U-shaped encoder-decoder design and explicitly embeds periodicity and directionality priors:
- PFM + AFFN: periodicity-aware modeling in the frequency domain (inter-frame and intra-frame);
- WDAM: directionality-aware modeling in the spatial-wavelet domain (high-frequency guidance for low-frequency restoration).
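The directional-wavelet intuition can be demonstrated with a single-level Haar decomposition (an assumption for illustration; WDAM's actual wavelet and usage may differ). Horizontal stripes, the signature of rolling-shutter flicker, concentrate their energy in the subband that differences along rows, leaving the other directional subband nearly empty:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform of an even-sized (H, W) image."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # average over row pairs
    d = (img[0::2, :] - img[1::2, :]) / 2   # difference over row pairs
    ll = (a[:, 0::2] + a[:, 1::2]) / 2      # low-low approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2      # responds to vertical edges
    hl = (d[:, 0::2] + d[:, 1::2]) / 2      # responds to horizontal edges/stripes
    hh = (d[:, 0::2] - d[:, 1::2]) / 2      # diagonal detail
    return ll, lh, hl, hh

# Horizontal stripes (row-wise variation, constant along each row):
stripes = np.sin(2 * np.pi * np.arange(128) / 8)[:, None] * np.ones((128, 128))
ll, lh, hl, hh = haar_dwt2(stripes)
# All stripe energy sits in `hl`; `lh` is zero.
```

This separation is what lets high-frequency directional cues guide restoration of the low-frequency band.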

🖼️ Qualitative Results
Across diverse flicker scenarios, Flickerformer localizes affected regions more precisely, restores illumination consistency, and preserves texture and color fidelity.

⚙️ Installation
- Install dependencies:

```shell
cd Flickerformer
pip install -r requirements.txt
```

- Install `basicsr` in the project root:

```shell
python setup.py develop
```
📦 Dataset
BurstDeflicker: [Kaggle Link](https://www.kaggle.com/datasets/lishenqu/burstflicker)
Recommended dataset structure:
```
dataset/
├── BurstFlicker-G
│   ├── train
│   │   ├── input
│   │   └── gt
│   └── test
└── BurstFlicker-S
    ├── train
    │   ├── input
    │   │   ├── 0001
    │   │   │   ├── 0001.png
    │   │   │   ├── 0002.png
    │   │   │   └── ...
    │   │   └── ...
    │   └── gt
    │       ├── 0001
    │       └── ...
    └── test
```
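A quick sanity check of the layout can save a failed training run. The helper below is hypothetical (not part of this repo) and verifies that every burst folder under `train/input` has a matching `train/gt` folder:

```python
import tempfile
from pathlib import Path

def missing_gt(root):
    """Return the names of input burst folders that lack a gt counterpart."""
    root = Path(root)
    inputs = sorted(p.name for p in (root / "train" / "input").iterdir() if p.is_dir())
    gts = {p.name for p in (root / "train" / "gt").iterdir() if p.is_dir()}
    return [name for name in inputs if name not in gts]

# Demo on a throwaway layout where burst 0002 has no ground truth:
root = Path(tempfile.mkdtemp())
for d in ["train/input/0001", "train/input/0002", "train/gt/0001"]:
    (root / d).mkdir(parents=True)
missing = missing_gt(root)   # -> ["0002"]
```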
To convert the mp4 videos into frames:

```shell
cd dataset
python cut.py
```
🚀 Training
```shell
bash ./dist_train.sh 2 options/Flickerformer.yml
```
✅ Testing and Evaluation
```shell
python test.py --input dataset/BurstFlicker-S/test-resize/input --output result/flickerformer --model_path Flickerformer.pth
python evaluate.py --input result/
```
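The exact metrics computed by `evaluate.py` are not shown here, but PSNR is the standard quantitative measure for restoration tasks like this. A minimal sketch of how it is computed per image pair, assuming images normalized to [0, 1]:

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    mse = np.mean((pred - gt) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

gt = np.full((8, 8), 0.5)
pred = gt + 0.1          # uniform error of 0.1 -> MSE of 0.01
# psnr(pred, gt) -> 10 * log10(1 / 0.01) = 20 dB
```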
📚 Citation
If you find this project useful, please cite:
```bibtex
@inproceedings{qu2026flickerformer,
  title={It Takes Two: A Duet of Periodicity and Directionality for Burst Flicker Removal},
  author={Qu, Lishen and Zhou, Shihao and Liang, Jie and Zeng, Hui and Zhang, Lei and Yang, Jufeng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2026}
}
```