# HazeFlow: Revisit Haze Physical Model as ODE and Non-Homogeneous Haze Generation for Real-World Dehazing (ICCV 2025)
<div style="display: flex; justify-content: space-between; align-items: baseline;"> <h2 style="color: gray; margin: 0;">Authors </h2> </div> <h3 style="margin-top: 0;"> <a href="https://junsung6140.github.io/">Junseong Shin*</a>, <a href="https://cloor.github.io/">Seungwoo Chung*</a>, Yunjeong Yang, <a href="https://sites.google.com/view/lliger9">Tae Hyun Kim<sup>†</sup></a> </h3> <h4><sub><sup>(* denotes equal contribution. <sup>†</sup> denotes corresponding author.)</sup></sub></h4> <p align="center"> <img src="assets/ASM5.png" alt="hazeflow" width="800"/> </p>

This is the official implementation of the ICCV 2025 paper "HazeFlow: Revisit Haze Physical Model as ODE and Non-Homogeneous Haze Generation for Real-World Dehazing" [paper] / [project page].
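For reference, the classical atmospheric scattering model (ASM) that HazeFlow revisits as an ODE can be written in its standard form; the notation below is generic and may differ from the paper's:

```latex
% Standard atmospheric scattering model (ASM), generic notation.
% I: observed hazy image, J: clean scene radiance, A: global atmospheric light,
% t: transmission map, beta: (spatially varying) scattering coefficient, d: scene depth.
\[
  I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr),
  \qquad t(x) = e^{-\beta(x)\,d(x)}
\]
```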
## Results
<p align="center"> <img src="assets/result.png" alt="result" width="800"/> </p>

More qualitative and quantitative results can be found on the [project page].
## 📦 Installation
```bash
git clone https://github.com/cloor/HazeFlow.git
cd HazeFlow
pip install -r requirements.txt
```

or

```bash
git clone https://github.com/cloor/HazeFlow.git
cd HazeFlow
conda env create -f environment.yaml
```
Checkpoints can be downloaded here.
Visual Results can be downloaded here.
## 🌫️ Haze Generation
<p align="center"> <img src="assets/mcbm.png" alt="mcbm" width="800"/> <br> <b>Figure:</b> Example of non-homogeneous haze synthesized via MCBM. (a) Generated hazy image. (b) Transmission map <code>T<sub>MCBM</sub></code>. (c) Spatially varying density coefficient map <code>𝛽̃</code>. </p>

You can generate haze density maps using MCBM by running the command below:

```bash
python haze_generation/brownian_motion_generation.py
```
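As a rough illustration of how a spatially varying density map turns into non-homogeneous haze, the sketch below applies the atmospheric scattering model with a per-pixel β̃ map. This is not the repo's MCBM implementation; the smoothed-noise density map, `A=0.9`, and all array shapes are assumptions for the example only:

```python
import numpy as np

def synthesize_haze(clean, depth, beta_map, A=0.9):
    """Apply the atmospheric scattering model with a spatially varying
    density map: I = J * t + A * (1 - t), with t = exp(-beta * d).
    Inputs are HxW(x3) float arrays in [0, 1]."""
    t = np.exp(-beta_map * depth)                    # per-pixel transmission
    t3 = t[..., None] if clean.ndim == 3 else t      # broadcast over channels
    return clean * t3 + A * (1.0 - t3)

# Toy non-homogeneous density map from smoothed Gaussian noise
# (a crude stand-in for the Brownian-motion-based maps the script produces).
rng = np.random.default_rng(0)
H, W = 64, 64
noise = rng.standard_normal((H, W))
k = np.ones(9) / 9.0
beta = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, noise)
beta = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, beta)
beta = 0.5 + 2.0 * (beta - beta.min()) / (beta.max() - beta.min() + 1e-8)

clean = rng.random((H, W, 3))
depth = np.tile(np.linspace(0.1, 1.0, W), (H, 1))   # depth grows left to right
hazy = synthesize_haze(clean, depth, beta)
```

Denser β̃ regions end up closer to the atmospheric light `A`, which is what produces the patchy, non-homogeneous look in the figure above.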
## 🏋️ Training
### 📁 Dataset Preparation
Please download and organize the datasets as follows:
| Dataset | Description | Download Link |
|----------|---------------------------------------------------------|----------------|
| RIDCP500 | 500 clear RGB images | rgb_500 / da_depth_500 |
| RTTS | Real-world task-driven testing set | Link |
| URHI | Urban and rural haze images (duplicate-removed version) | Link |
```
HazeFlow/
├── datasets/
│   ├── RIDCP500/
│   │   ├── rgb_500/
│   │   ├── da_depth_500/
│   │   ├── MCBM/
│   ├── RTTS/
│   ├── URHI/
│   └── custom/
```
Before training, make sure the datasets are properly structured as shown above.
Additionally, prepare the MCBM-based haze density maps (see Haze Generation above) and the corresponding depth maps.
To estimate depth maps, follow the instructions in the Depth Anything V2 repository and place the resulting maps in the `datasets/RIDCP500/da_depth_500/` directory.
Once depth maps are ready, you can proceed to training and inference as described below.
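Before launching training, a quick sanity check can confirm that every RGB image has a matching depth map. This helper is a hedged sketch, not part of the repo; the `.png` extensions and name-matching-by-stem convention are assumptions:

```python
from pathlib import Path

def check_pairs(rgb_dir, depth_dir, rgb_ext=".png", depth_ext=".png"):
    """Return the set of RGB image stems that lack a matching depth map."""
    rgb = {p.stem for p in Path(rgb_dir).glob(f"*{rgb_ext}")}
    depth = {p.stem for p in Path(depth_dir).glob(f"*{depth_ext}")}
    return rgb - depth

# Usage (paths follow the layout above):
# missing = check_pairs("datasets/RIDCP500/rgb_500", "datasets/RIDCP500/da_depth_500")
# if missing:
#     print(f"{len(missing)} images lack depth maps, e.g.:", sorted(missing)[:5])
```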
### 1. Pretrain Phase

We propose using a color loss to reduce color distortion.
You can configure the loss type by editing `--config.training.loss_type` in `pretrain.sh`.

```bash
sh pretrain.sh
```
### 2. Reflow Phase

Specify the pretrained checkpoint from the pretrain phase by editing `--config.flow.pre_train_model` in `reflow.sh`.

```bash
sh reflow.sh
```
### 3. Distillation Phase

Specify the checkpoint obtained from the reflow phase by editing `--config.flow.pre_train_model` in `distill.sh`.

```bash
sh distill.sh
```
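The three-phase pipeline follows the rectified-flow recipe of the works credited in the Acknowledgements: pretraining learns a velocity field, reflow re-trains it on pairs generated by the previous flow to straighten trajectories, and distillation compresses the ODE into few (or one) steps. As a rough sketch of the underlying objective, in our own generic notation rather than the paper's:

```latex
% Rectified-flow training objective (generic form, an assumption for illustration).
% X_0: source sample (e.g. hazy image), X_1: target sample (clean image),
% v_theta: learned velocity field.
\[
  \min_\theta \; \mathbb{E}
  \bigl\| (X_1 - X_0) - v_\theta(X_t, t) \bigr\|^2,
  \qquad X_t = t\,X_1 + (1 - t)\,X_0,\;\; t \sim \mathcal{U}[0,1]
\]
```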
## Inference & Demo
To run inference on your own images, place them in the `datasets/custom/` directory.
Then, configure the following options in `sampling.sh`:

- `--config.sampling.ckpt`: path to your trained model checkpoint
- `--config.data.dataset`: name of your dataset (`rtts` or `custom`)
- `--config.data.test_data_root`: path to your input images
Finally, run:
```bash
sh sampling.sh
```
## 🔗 Acknowledgements
Our implementation is based on RectifiedFlow and SlimFlow. We sincerely thank the authors for their contributions to the community.
## 📚 Citation
If you use this code or find our work helpful, please cite our paper:
```bibtex
@inproceedings{shin2025hazeflow,
  title={HazeFlow: Revisit Haze Physical Model as ODE and Non-Homogeneous Haze Generation for Real-World Dehazing},
  author={Shin, Junseong and Chung, Seungwoo and Yang, Yunjeong and Kim, Tae Hyun},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={6263--6272},
  year={2025}
}
```
## Contact
If you have any questions, please contact junsung6140@hanyang.ac.kr.
