BRIGHT
[ESSD 2025 & IEEE DFC 2025 & CVPRW 2026] BRIGHT: A globally distributed multimodal VHR dataset for all-weather disaster response
Hongruixuan Chen<sup>1,2</sup>, Jian Song<sup>1,2</sup>, Olivier Dietrich<sup>3</sup>, Clifford Broni-Bediako<sup>2</sup>, Weihao Xuan<sup>1,2</sup>, Junjue Wang<sup>1</sup>
Xinlei Shao<sup>1</sup>, Yimin Wei<sup>1,2</sup>, Junshi Xia<sup>3</sup>, Cuiling Lan<sup>4</sup>, Konrad Schindler<sup>3</sup>, Naoto Yokoya<sup>1,2 *</sup>
<sup>1</sup> The University of Tokyo, <sup>2</sup> RIKEN AIP, <sup>3</sup> ETH Zurich, <sup>4</sup> Microsoft Research Asia
Overview | Start BRIGHT | Common Issues | Follow-Ups | Others
🛎️Updates
Notice☀️☀️: BRIGHT has been accepted by ESSD!! The contents related to IEEE GRSS DFC 2025 have been transferred to here!!

- Mar 25th, 2026: The BRIGHT challenge, advancing multimodal building damage mapping to the instance level, at CVPRW 2026 is now open. You can download the instance labels, run our baseline code, and submit your results on the Codabench page now!!
- Nov 18th, 2025: BRIGHT has been accepted by ESSD and is now available online!!
- Aug 12th, 2025: BRIGHT has been integrated into TorChange. Many thanks for the effort of Dr. Zhuo Zheng!!
- May 05th, 2025: All the data and benchmark code related to our paper have now been released. You are warmly welcome to use them!!
- Apr 28th, 2025: IEEE GRSS DFC 2025 Track II is over. Congratulations to the winners!! You can now download the full version of the DFC 2025 Track II data on Zenodo or HuggingFace!!
- Jan 18th, 2025: BRIGHT has been integrated into TorchGeo. Many thanks for the effort of Nils Lehmann!!
- Jan 13th, 2025: The arXiv paper of BRIGHT is now online. If you are interested in the details of BRIGHT, do not hesitate to take a look!!
🔭Overview
- BRIGHT is the first open-access, globally distributed, event-diverse multimodal dataset specifically curated to support AI-based disaster response. It covers five types of natural disasters and two types of man-made disasters across 14 disaster events in 23 regions worldwide, with a particular focus on developing countries.

- It supports not only the development of supervised deep models, but also the evaluation of their performance in cross-event transfer setups, as well as unsupervised domain adaptation, semi-supervised learning, unsupervised change detection, and unsupervised image matching methods in multimodal and disaster scenarios.
🗝️Let's Get Started with BRIGHT!
A. Installation
Note that the code in this repo runs under Linux. We have not tested it on other operating systems.
Step 1: Clone the repository:
Clone this repository and navigate to the project directory:
```bash
git clone https://github.com/ChenHongruixuan/BRIGHT.git
cd BRIGHT
```
Step 2: Environment Setup:
It is recommended to set up a conda environment and install dependencies via pip. Use the following commands to set up your environment:

```bash
# Create and activate a new conda environment
conda create -n bright-benchmark
conda activate bright-benchmark

# Install dependencies
pip install -r requirements.txt
```
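After installation, a quick import check can confirm that the environment resolved correctly. Note that `requirements.txt` defines the actual dependency list; `torch` and `rasterio` below are assumptions about its contents, not confirmed by this README.

```python
# Sanity-check that core packages (assumed, see note above) are importable.
import importlib.util

status = {}
for pkg in ["numpy", "torch", "rasterio"]:
    status[pkg] = "found" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status[pkg]}")
```

If any package reports `MISSING`, re-run `pip install -r requirements.txt` inside the activated environment.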
B. Data Preparation
Please download the BRIGHT dataset from Zenodo or HuggingFace. Note that we cannot redistribute the optical data over Ukraine, Myanmar, and Mexico; please follow our tutorial to download and preprocess them.

After the data has been prepared, please organize it into the following folder/file structure:
```
${DATASET_ROOT}   # Dataset root directory, e.g. /home/username/data/bright
│
├── pre-event
│   ├── bata-explosion_00000000_pre_disaster.tif
│   ├── bata-explosion_00000001_pre_disaster.tif
│   ├── bata-explosion_00000002_pre_disaster.tif
│   ...
│
├── post-event
│   ├── bata-explosion_00000000_post_disaster.tif
│   ...
│
└── target
    ├── bata-explosion_00000000_building_damage.tif
    ...
```
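Following the naming convention shown above, each tile's pre-event image, post-event image, and damage target share a common stem. A minimal sketch of pairing them up (the helper name `list_bright_triplets` is illustrative, not part of the repo; the demo runs on a throwaway directory tree mimicking the layout above):

```python
# Pair pre/post/target tiles by their shared stem, based on the naming
# convention shown above (e.g. bata-explosion_00000000_*).
from pathlib import Path
import tempfile

def list_bright_triplets(root):
    """Return (pre, post, target) path triplets that share a tile stem."""
    root = Path(root)
    triplets = []
    for pre in sorted((root / "pre-event").glob("*_pre_disaster.tif")):
        stem = pre.name.replace("_pre_disaster.tif", "")
        post = root / "post-event" / f"{stem}_post_disaster.tif"
        target = root / "target" / f"{stem}_building_damage.tif"
        if post.exists() and target.exists():
            triplets.append((pre, post, target))
    return triplets

# Demo on a temporary directory tree with one empty tile.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for sub in ("pre-event", "post-event", "target"):
        (root / sub).mkdir()
    (root / "pre-event" / "bata-explosion_00000000_pre_disaster.tif").touch()
    (root / "post-event" / "bata-explosion_00000000_post_disaster.tif").touch()
    (root / "target" / "bata-explosion_00000000_building_damage.tif").touch()
    print(len(list_bright_triplets(root)))  # → 1
```

A check like this is useful for catching tiles whose post-event image or target is missing before training starts.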
C. Model Training & Tuning
The following commands show how to train and evaluate UNet on the BRIGHT dataset using our standard ML split in `bda_benchmark/dataset/splitname/standard_ML`:
```bash
python script/standard_ML/train_UNet.py --dataset 'BRIGHT' \
    --train_batch_size 16 \
    --eval_batch_size 4 \
    --num_workers 16 \
    --crop_size 640 \
    --max_iters 800000 \
    --learning_rate 1e-4 \
    --model_type 'UNet' \
    --model_param_path '<your model checkpoint saved path>' \
    --train_dataset_path '<your dataset path>' \
    --train_data_list_path '<your project path>/bda_benchmark/dataset/splitname/standard_ML/train_set.txt' \
    --val_dataset_path '<your dataset path>' \
    --val_data_list_path '<your project path>/bda_benchmark/dataset/splitname/standard_ML/val_set.txt' \
    --test_dataset_path '<your dataset path>' \
    --test_data_list_path '<your project path>/bda_benchmark/dataset/splitname/standard_ML/test_set.txt'
```
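The `*_data_list_path` arguments point at plain-text split files. A hedged sketch of reading one, assuming the format is one tile name per line (check the actual files under `bda_benchmark/dataset/splitname/` before relying on this):

```python
# Read a split list file, assumed to contain one tile name per line;
# blank lines are skipped. The file content below is a made-up example.
import os
import tempfile

def read_split_list(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Example with a throwaway stand-in for train_set.txt
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("bata-explosion_00000000\nbata-explosion_00000001\n\n")
tmp.close()
names = read_split_list(tmp.name)
print(names)  # ['bata-explosion_00000000', 'bata-explosion_00000001']
os.unlink(tmp.name)
```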
D. Inference & Evaluation
Then, you can run the following command to generate raw and visualized prediction results and evaluate performance using the saved weights. You can also download our provided checkpoints from Zenodo.
```bash
python script/standard_ML/infer_UNet.py --model_path '<path of the checkpoint of model>' \
    --test_dataset_path '<your dataset path>' \
    --test_data_list_path '<your project path>/bda_benchmark/dataset/splitname/standard_ML/test_set.txt' \
    --output_dir '<your inference results saved path>'
```
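For reference, a minimal sketch of the kind of per-class IoU metric such an evaluation reports, computed from a confusion matrix. The repo's own evaluation code is the authoritative implementation; the number of damage classes here (4) is an assumption for illustration.

```python
# Per-class IoU from flat integer label arrays via a confusion matrix.
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """IoU per class; classes absent from both pred and gt yield NaN."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)  # rows: gt, cols: pred
    tp = np.diag(cm)
    denom = cm.sum(axis=0) + cm.sum(axis=1) - tp  # pred + gt - intersection
    return np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)

# Toy example with 4 hypothetical classes.
gt = np.array([0, 0, 1, 1, 2, 3])
pred = np.array([0, 1, 1, 1, 2, 2])
print(per_class_iou(pred, gt, 4))  # per-class IoU array
```

Mean IoU is then the average over the valid (non-NaN) entries, e.g. `np.nanmean(...)`.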
E. Other Benchmarks & Setup
In addition to the above supervised deep models, BRIGHT also provides standardized evaluation setups for several important learning paradigms and multimodal EO tasks:
- Cross-event transfer setup: Evaluate model generalization across disaster types and regions. This setup simulates real-world scenarios where no labeled data (zero-shot) or limited labeled data (one-shot) is available for the target event during training.
- Unsupervised domain adaptation: Adapt models trained on source disaster events to unseen target events without any target labels, using UDA techniques under the zero-shot cross-event setting.
- Semi-supervised learning: Leverage a small number of labeled samples and a larger set of unlabeled samples from the target event to improve performance under the one-shot cross-event setting.
- Unsupervised multimodal change detection: Detect disaster-induced building changes without using any labels. This setup supports benchmarking of general-purpose change detection algorithms.
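The zero-shot cross-event setup boils down to holding out every tile of one event as the unseen target. Since tile names are prefixed with their event (e.g. `bata-explosion` in `bata-explosion_00000000`), a split can be sketched as follows; the second event name below is purely illustrative, and the repo's split files under `bda_benchmark/dataset/splitname` define the official partitions.

```python
# Hold out all tiles of one event as the unseen target; train on the rest.
def cross_event_split(tile_names, target_event):
    train = [t for t in tile_names if not t.startswith(target_event + "_")]
    test = [t for t in tile_names if t.startswith(target_event + "_")]
    return train, test

tiles = [
    "bata-explosion_00000000",
    "bata-explosion_00000001",
    "some-other-event_00000000",  # illustrative event name
]
train, test = cross_event_split(tiles, "bata-explosion")
print(len(train), len(test))  # → 1 2
```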
