RSHazeDiff: A Unified Fourier-aware Diffusion Model for Remote Sensing Image Dehazing (TITS 2024)
Jiamei Xiong, Xuefeng Yan, Yongzhen Wang, Wei Zhao, Xiao-Ping Zhang, Mingqiang Wei
News
- Dec 20, 2023: This repo is released.
- May 15, 2024: arXiv paper is available.
- Nov 8, 2024: 😊 Paper is accepted by IEEE TITS2024.
- Jan 17, 2025: 🔈The code is available now, enjoy yourself!
- Jan 20, 2025: Updated README file with detailed instructions.
<hr />Abstract: Haze severely degrades the visual quality of remote sensing images and hampers the performance of road extraction, vehicle detection, and traffic flow monitoring. The emerging denoising diffusion probabilistic model (DDPM) exhibits significant potential for dense haze removal with its strong generation ability. Since remote sensing images contain extensive small-scale texture structures, it is important to effectively restore image details from hazy images. However, current wisdom of DDPM fails to preserve image details and color fidelity well, limiting its dehazing capacity for remote sensing images. In this paper, we propose a novel unified Fourier-aware diffusion model for remote sensing image dehazing, termed RSHazeDiff. From a new perspective, RSHazeDiff explores the conditional DDPM to improve image quality in dense hazy scenarios, and it makes three key contributions. First, RSHazeDiff refines the training phase of the diffusion process by performing noise estimation and reconstruction constraints in a coarse-to-fine fashion, remedying the unpleasant results caused by the simple noise estimation constraint in DDPM. Second, by taking frequency information as important prior knowledge during iterative sampling steps, RSHazeDiff preserves more texture details and color fidelity in dehazed images. Third, we design a global compensated learning module that uses the Fourier transform to capture the global dependency features of input images, effectively mitigating boundary artifacts when processing fixed-size patches. Experiments on both synthetic and real-world benchmarks validate the favorable performance of RSHazeDiff over state-of-the-art methods.
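The frequency prior above builds on the standard amplitude/phase decomposition of the 2-D Fourier transform. The following is a minimal NumPy sketch of that decomposition only; `fourier_exchange` and its arguments are hypothetical names for illustration, not part of the released code or the exact RSHazeDiff module.

```python
import numpy as np

def fourier_exchange(content_img, reference_img):
    """Recombine the phase spectrum of one image with the amplitude
    spectrum of another (illustrative sketch of the amplitude/phase
    decomposition behind Fourier-based priors)."""
    f_c = np.fft.fft2(content_img)
    f_r = np.fft.fft2(reference_img)
    amp_r = np.abs(f_r)        # amplitude spectrum of the reference
    pha_c = np.angle(f_c)      # phase spectrum of the content image
    recombined = amp_r * np.exp(1j * pha_c)
    return np.real(np.fft.ifft2(recombined))
```

Note that recombining an image's own amplitude and phase reconstructs the image itself, which is a quick sanity check for the decomposition.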
Network Architecture
<img src = "https://imgur.la/images/2025/01/20/Overview.jpg">

⭐If this work is helpful for you, please help star this repo. Thanks!🤗
Getting Started
Environment
Clone this repo:
git clone https://github.com/jm-xiong/RSHazeDiff.git
cd RSHazeDiff/
Create a new conda environment and install dependencies:
conda create -n rshazediff python=3.7
conda activate rshazediff
conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.7 -c pytorch -c nvidia
Prepare Datasets
You can download the datasets LHID & DHID (password: QW67) and RICE. Note that the ERICE dataset is built from RICE1 by cropping the images into non-overlapping 256 × 256 patches. Make sure the file structure is consistent with the following:
└── Dataset
├── ERICE
│ ├── Test
│ │ ├── GT
│ │ └── Haze
│ └── Train
│ ├── GT
│ └── Haze
└── HazyRemoteSensingDatasets
├── DHID
│ ├── TestingSet
│ │ └── Test
│ │ ├── GT
│ │ └── Haze
│ └── TrainingSet
│ ├── GT
│ └── Haze
└── LHID
├── TestingSet
│ └── Merge
│ ├── GT
│ └── Haze
└── TrainingSet
├── GT
└── Haze
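A quick stdlib check can confirm your layout matches the tree above before training. The `missing_dirs` helper below is hypothetical (not part of the released code); the paths are taken directly from the structure shown.

```python
from pathlib import Path

# Directory paths assumed from the tree above.
EXPECTED = [
    "Dataset/ERICE/Train/GT",
    "Dataset/ERICE/Train/Haze",
    "Dataset/ERICE/Test/GT",
    "Dataset/ERICE/Test/Haze",
    "Dataset/HazyRemoteSensingDatasets/DHID/TrainingSet/GT",
    "Dataset/HazyRemoteSensingDatasets/DHID/TrainingSet/Haze",
    "Dataset/HazyRemoteSensingDatasets/DHID/TestingSet/Test/GT",
    "Dataset/HazyRemoteSensingDatasets/DHID/TestingSet/Test/Haze",
    "Dataset/HazyRemoteSensingDatasets/LHID/TrainingSet/GT",
    "Dataset/HazyRemoteSensingDatasets/LHID/TrainingSet/Haze",
    "Dataset/HazyRemoteSensingDatasets/LHID/TestingSet/Merge/GT",
    "Dataset/HazyRemoteSensingDatasets/LHID/TestingSet/Merge/Haze",
]

def missing_dirs(root="."):
    """Return the expected dataset directories that do not exist under root."""
    return [p for p in EXPECTED if not (Path(root) / p).is_dir()]
```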
Train
Run the following command to train the diffusion branch:
cd Diffusion_branch
python train_diffusion.py --config 'DHID.yml' --sampling_timesteps 10 --image_folder './results'
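The coarse-to-fine training described in the abstract pairs the usual DDPM noise-estimation loss with a reconstruction constraint on the clean image implied by the noise estimate. The NumPy sketch below shows only those two terms; `diffusion_losses` is a hypothetical helper, and the released trainer differs in detail.

```python
import numpy as np

def diffusion_losses(x0, eps, eps_pred, alpha_bar):
    """Sketch of a DDPM training step with an added reconstruction term.

    x0       -- clean image; eps -- the Gaussian noise actually added;
    eps_pred -- the network's noise estimate; alpha_bar -- cumulative
    noise-schedule coefficient in (0, 1).
    """
    # Standard DDPM forward process: noisy sample at this timestep.
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    # Plain noise-estimation constraint used by vanilla DDPM.
    noise_loss = np.mean((eps_pred - eps) ** 2)
    # Clean image implied by the noise estimate -> reconstruction constraint.
    x0_pred = (xt - np.sqrt(1.0 - alpha_bar) * eps_pred) / np.sqrt(alpha_bar)
    recon_loss = np.mean((x0_pred - x0) ** 2)
    return noise_loss, recon_loss
```

When the noise estimate is exact, both terms vanish; the reconstruction term additionally penalizes estimation errors in proportion to their effect on the recovered clean image.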
Then, run the following command to train the global branch:
cd Global_branch
python train.py --config 'DHID.yml' --image_folder './results'
Please note that the global branch takes the output images of the diffusion branch as its input.
Test
After training the diffusion branch, the weights are saved in ./checkpoints/. You can load this checkpoint to obtain the output images of the diffusion branch.
cd Diffusion_branch
python eval_diffusion.py --config "DHID.yml" --resume 'DHID.pth.tar' --test_set 'DHID' --sampling_timesteps 10 --grid_r 16
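Patch-based sampling (cf. the boundary-artifact discussion in the abstract) covers the image with overlapping fixed-size patches and averages the overlapping predictions. The sketch below shows only that tiling-and-averaging idea with hypothetical helper names; the released sampler averages diffusion noise estimates rather than finished images, and the exact role of `--grid_r` is defined by the code.

```python
import numpy as np

def patch_starts(length, patch, stride):
    """Top-left coordinates of overlapping patches covering one axis."""
    starts = list(range(0, length - patch + 1, stride))
    if starts[-1] != length - patch:   # make sure the border is covered
        starts.append(length - patch)
    return starts

def merge_patch_outputs(h, w, patch, stride, predict):
    """Run `predict(i, j)` on every overlapping patch and average overlaps."""
    out = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for i in patch_starts(h, patch, stride):
        for j in patch_starts(w, patch, stride):
            out[i:i + patch, j:j + patch] += predict(i, j)
            cnt[i:i + patch, j:j + patch] += 1
    return out / cnt   # averaging smooths seams between adjacent patches
```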
Then, use the output images of the diffusion branch as input and load the checkpoint generated by training the global branch to produce the final results.
cd Global_branch
python test.py --config "DHID.yml" --resume 'DHID.pth.tar'
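Dehazing results on the synthetic benchmarks are typically compared against the GT images with PSNR. If you want a quick score for the final outputs, a minimal PSNR helper (hypothetical, not part of this repo) looks like:

```python
import numpy as np

def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio between a dehazed output and its GT."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```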
Citation
Please cite our work if you find it useful for your research.
@article{xiong2025rshazediff,
  title={RSHazeDiff: A Unified Fourier-aware Diffusion Model for Remote Sensing Image Dehazing},
  author={Xiong, Jiamei and Yan, Xuefeng and Wang, Yongzhen and Zhao, Wei and Zhang, Xiao-Ping and Wei, Mingqiang},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  volume={26},
  number={1},
  pages={1055--1070},
  year={2025},
  doi={10.1109/TITS.2024.3487972},
  publisher={IEEE}
}
Acknowledgement
This code is based on WeatherDiffusion. Thanks for their awesome work.
Contact
If you have any questions, feel free to contact me at jmxiong@nuaa.edu.cn.
