<div align="center"> <h1 align="center">DiffCR: A Fast Conditional Diffusion Framework for Cloud Removal from Optical Satellite Images</h1> <p align="center">This repository is the official PyTorch implementation of the TGRS 2024 paper DiffCR.</p>

arXiv Paper Project Page HuggingFace Models HuggingFace Visualization

DiffCR

</div>

Requirements

To install dependencies:

pip install -r requirements.txt

To download datasets:

Training

To train the models in the paper, run:

python run.py -p train -c config/ours_sigmoid.json

Testing

To test the pre-trained models in the paper, run:

python run.py -p test -c config/ours_sigmoid.json

Evaluation

To evaluate the models on the two datasets, run:

python evaluation/eval.py -s [ground-truth image path] -d [predicted-sample image path]
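As a rough illustration of the kind of comparison this script performs between ground-truth and predicted images, here is a minimal NumPy-only PSNR sketch. This is an assumption about the metric internals, not the actual `evaluation/eval.py` code, which also reports further metrics such as SSIM:

```python
import numpy as np

def psnr(gt: np.ndarray, pred: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two equally shaped images."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range**2 / mse)

# Toy check: a ground-truth image vs. a lightly noised "prediction".
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
noise = rng.integers(-5, 6, size=gt.shape)
pred = np.clip(gt.astype(np.int16) + noise, 0, 255).astype(np.uint8)
score = psnr(gt, pred)  # roughly 38 dB for this noise level
```

Higher PSNR means the predicted cloud-free image is closer to the ground truth; identical images give infinite PSNR.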

Pretrained Model Weights

You can download pretrained models here:

Visualization

The visualization results of 12 methods (including DiffCR) on the test sets of Sen2_MTC_Old and Sen2_MTC_New datasets, along with evaluation code for direct comparison by researchers, are available at: 🤗 HuggingFace Visualization

├── paper-report.png          ← reference metrics table from the paper
│
├── data/
│   ├── Sen2_MTC_New/
│   │   ├── GT/               ← 687 cloud-free ground-truth images  ({id}.png)
│   │   └── inputs/           ← 687 × 3 cloudy input images
│   │                            ({id}_A1.png  {id}_A2.png  {id}_A3.png)
│   └── Sen2_MTC_Old/
│       ├── GT/               ← 313 ground-truth images
│       └── inputs/           ← 313 × 3 cloudy inputs
│
├── results/
│   ├── Sen2_MTC_New/
│   │   ├── ae/               ← prediction images for each method ({id}.png)
│   │   ├── crtsnet/
│   │   ├── ctgan/
│   │   ├── ddpmcr/
│   │   ├── diffcr/           ← DiffCR [Ours]
│   │   ├── dsen2cr/
│   │   ├── mcgan/
│   │   ├── pix2pix/
│   │   ├── pmaa/
│   │   ├── stgan/
│   │   ├── stnet/
│   │   └── uncrtaints/
│   └── Sen2_MTC_Old/
│       └── (same 12 methods)
│
└── eval/
    ├── metrics.py            ← PSNR / SSIM / FID / LPIPS evaluation
    ├── plot.py               ← comparison figure generation
    └── requirements.txt      ← Python dependencies
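Because each method's predictions share filenames with the ground truth (`{id}.png`), pairing predictions with references for evaluation reduces to a filename match. A minimal sketch, assuming the layout above (recreated here in a temporary directory for demonstration):

```python
from pathlib import Path
import tempfile

# Recreate a miniature copy of the layout above in a temporary directory.
root = Path(tempfile.mkdtemp())
(root / "data/Sen2_MTC_New/GT").mkdir(parents=True)
(root / "results/Sen2_MTC_New/diffcr").mkdir(parents=True)
for i in ("0001", "0002"):
    (root / f"data/Sen2_MTC_New/GT/{i}.png").touch()
    (root / f"results/Sen2_MTC_New/diffcr/{i}.png").touch()

def paired_images(dataset: str, method: str):
    """Yield (ground_truth, prediction) path pairs matched by {id}.png name."""
    gt_dir = root / "data" / dataset / "GT"
    pred_dir = root / "results" / dataset / method
    for gt in sorted(gt_dir.glob("*.png")):
        pred = pred_dir / gt.name
        if pred.exists():
            yield gt, pred

matched = list(paired_images("Sen2_MTC_New", "diffcr"))
```

Swapping `method` for any of the twelve directory names above compares a different baseline against the same ground truth.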

Citation

If you use our code or models in your research, please cite with:

@ARTICLE{diffcr,
  author={Zou, Xuechao and Li, Kai and Xing, Junliang and Zhang, Yu and Wang, Shiying and Jin, Lei and Tao, Pin},
  journal={IEEE Transactions on Geoscience and Remote Sensing}, 
  title={DiffCR: A Fast Conditional Diffusion Framework for Cloud Removal From Optical Satellite Images}, 
  year={2024},
  volume={62},
  number={},
  pages={1-14},
}

Acknowledgments

Janspiry/Palette-Image-to-Image-Diffusion-Models

openai/guided-diffusion
