TripleMixer
[TIP 2025] TripleMixer: A Triple-Domain Mixing Model for Point Cloud Denoising under Adverse Weather
Abstract
Adverse weather conditions such as snow, fog, and rain pose significant challenges to LiDAR-based perception models by introducing noise and corrupting point cloud measurements. To address this issue, we make the following three contributions:
- Point cloud denoising network: we propose TripleMixer, a robust and efficient point cloud denoising network that integrates spatial, frequency, and channel-wise processing through three specialized mixer modules. TripleMixer can be seamlessly deployed as a plug-and-play module within existing LiDAR perception pipelines;
- Large-scale adverse weather datasets: we construct two large-scale simulated datasets, Weather-KITTI and Weather-NuScenes, covering diverse weather scenarios with dense point-wise semantic and noise annotations;
- LiDAR perception benchmarks: we establish four benchmarks: Denoising, Semantic Segmentation (SS), Place Recognition (PR), and Object Detection (OD). These benchmarks enable systematic evaluation of denoising generalization, transferability, and downstream impact under both simulated and real-world adverse weather conditions.
Updates
- 09/25/2025: Our paper has been accepted by IEEE TIP! 🎉🎉
- 08/22/2025: All codes and configurations have been updated!
- 12/26/2024: The Weather-KITTI and Weather-NuScenes datasets are publicly available on the BaiduPan platform!
  - Weather-KITTI: Download link (code: xxr1)
  - Weather-NuScenes: Download link (code: musq)
- 24/08/2024: Initial release and submitted to the journal. The dataset will be open source soon!
Outline
- Dataset
- Denoising Network
- LiDAR Perception Benchmarks
- Installation
- Training and Evaluation
- Dataset Generation
- TODO List
- Citation
- License
- Acknowledgements
Dataset
1) Overview
Our Weather-KITTI and Weather-NuScenes datasets are built on SemanticKITTI and nuScenes-lidarseg, respectively. They cover three common adverse weather conditions (rain, fog, and snow), retain the original LiDAR acquisition information, and provide point-level semantic labels for each weather type. The visualization results are shown below:
<p align="center"> <img src="figs/combined.png" width="50%" height="400px"> </p>

2) Dataset Statistics
<p align="center"> <img src="figs/frames.png" width="85%"> </p> <p align="center"> <img src="figs/kitti_semantic.png" width="85%"> </p>

Denoising Network
1) Overview
We propose TripleMixer, a plug-and-play point cloud denoising network that integrates spatial, frequency, and channel-wise processing through three specialized mixer layers. TripleMixer enables interpretable and robust denoising under adverse weather conditions, and can be seamlessly integrated into existing LiDAR perception pipelines to enhance their robustness. The overview of the proposed TripleMixer denoising network is shown below:
<p align="center"> <img src="figs/triplemixer.png" width="95%"> </p>

2) Results Visualization
<p align="center"> <img src="figs/denoise-vis.png" width="95%"> </p>

LiDAR Perception Benchmarks
We establish a Denoising benchmark to evaluate the performance of our denoising model and introduce three downstream LiDAR perception benchmarks: Semantic Segmentation (SS), Place Recognition (PR), and Object Detection (OD), to assess the generalization of state‑of‑the‑art perception models under adverse weather and the effectiveness of our denoising model as a preprocessing step. Notably, in all downstream benchmarks, our denoising model is trained in a supervised manner solely on our Weather‑KITTI and Weather‑NuScenes datasets using only point‑wise weather labels. Meanwhile, all perception models are directly tested on real‑world adverse‑weather datasets without any retraining or fine‑tuning.
1) Denoising
<p align="center"> <img src="figs/kitti-denoise.png" width="95%"> </p> <p align="center"> <img src="figs/nus-denoise.png" width="95%"> </p>

2) Semantic Segmentation (SS)
- Segmentation model selection:
  - SphereFormer, CVPR 2023. <sup>[Code]</sup>
  - SFPNet, ECCV 2024. <sup>[Code]</sup>
  - PointTransformerV3, CVPR 2024. <sup>[Code]</sup>
- Benchmark Results:
3) Place Recognition (PR)
- Place Recognition model selection:
- Benchmark Results:
4) Object Detection (OD)
- Detection model selection:
- Benchmark Results:
Installation
We use the following environment:
```bash
conda create -n triplemixer
conda activate triplemixer
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
pip install pyaml==23.12.0 tqdm==4.63.0 scipy==1.8.0 tensorboard==2.16.2
git clone https://github.com/Grandzxw/TripleMixer
cd TripleMixer
pip install -r requirements.txt
```
Training and Evaluation
1) Training
To train on the WADS dataset, run:

```bash
python launch_train.py \
    --dataset snow_wads \
    --path_dataset /path/to/wads/ \
    --log_path ./pretrained_models/wads/ \
    --config ./configs/Wads.yaml \
    --gpu 2 \
    --fp16
```
For other datasets, make the corresponding modifications accordingly.
2) Evaluation and Test
Pre-trained models can be downloaded from the Download link.
We follow the data preprocessing pipeline of 3D_OutDet (https://github.com/sporsho/3D_OutDet). Before evaluation, please run ./datasets/remove_duplicate.py to remove duplicate point cloud data, and then remap the original labels of the WADS dataset to make them compatible with TripleMixer.
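As an illustration of what the duplicate-removal step does, here is a minimal numpy sketch (this is NOT the repo's `remove_duplicate.py`; the `(N, 4)` `[x, y, z, intensity]` point layout is an assumption):

```python
import numpy as np

def remove_duplicate_points(points: np.ndarray) -> np.ndarray:
    """Drop exactly repeated rows from an (N, 4) array of
    [x, y, z, intensity] points, keeping first occurrences in order."""
    # np.unique with axis=0 finds unique rows; return_index gives the
    # first position of each unique row in the original array.
    _, first_idx = np.unique(points, axis=0, return_index=True)
    return points[np.sort(first_idx)]

# Toy example: two of the four points are exact duplicates.
pts = np.array([
    [1.0, 2.0, 0.5, 0.9],
    [1.0, 2.0, 0.5, 0.9],   # duplicate
    [3.0, 1.0, 0.2, 0.4],
    [3.0, 1.0, 0.2, 0.4],   # duplicate
])
deduped = remove_duplicate_points(pts)
print(deduped.shape[0])  # → 2
```

The actual preprocessing script should be preferred; this sketch only shows the exact-row deduplication idea.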
To evaluate on the WADS dataset, run:

```bash
cd test
python eval_wads.py \
    --path_dataset /root/WADS \
    --ckpt ./logs/wads/ckpt_best.pth \
    --config ./configs/Wads.yaml \
    --result_folder ./result/predictions_wads \
    --phase test \
    --num_workers 12
```
To compute IoU on the WADS dataset, run:

```bash
cd test
python test_iou_wads.py
```
For other datasets, make the corresponding modifications accordingly.
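For reference, the point-wise IoU of the noise class in a binary noise/clean denoising evaluation can be computed as follows (a minimal sketch under the standard IoU definition; it is independent of `test_iou_wads.py`, and the label convention `1 = noise, 0 = clean` is an assumption):

```python
import numpy as np

def binary_iou(pred: np.ndarray, gt: np.ndarray, noise_label: int = 1) -> float:
    """IoU of the noise class: |pred ∩ gt| / |pred ∪ gt| over point labels."""
    pred_noise = pred == noise_label
    gt_noise = gt == noise_label
    intersection = np.logical_and(pred_noise, gt_noise).sum()
    union = np.logical_or(pred_noise, gt_noise).sum()
    return float(intersection) / max(float(union), 1.0)

# Toy example: 6 points, labels 1 = weather noise, 0 = clean.
gt   = np.array([1, 1, 0, 0, 1, 0])
pred = np.array([1, 0, 0, 0, 1, 1])
print(binary_iou(pred, gt))  # → 0.5  (2 correct noise hits over a union of 4)
```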
Dataset Generation
You can generate your own adverse weather dataset from other LiDAR point cloud datasets using the code provided in the tools directory of this repository!
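As a toy illustration of the idea only (NOT the weather simulation in the tools directory, whose physical models and parameters differ), one can append randomly scattered low-intensity points to a clean scan and label them as weather noise; all function names and parameters below are hypothetical:

```python
import numpy as np

def inject_random_noise(points: np.ndarray, labels: np.ndarray,
                        n_noise: int, noise_label: int = 1,
                        radius: float = 20.0, seed: int = 0):
    """Append n_noise uniformly scattered points (a crude stand-in for
    snow/rain clutter) around the sensor and label them as noise."""
    rng = np.random.default_rng(seed)
    xyz = rng.uniform(-radius, radius, size=(n_noise, 3))
    intensity = rng.uniform(0.0, 0.1, size=(n_noise, 1))  # weak returns
    noise_pts = np.hstack([xyz, intensity])
    out_pts = np.vstack([points, noise_pts])
    out_lbl = np.concatenate([labels, np.full(n_noise, noise_label)])
    return out_pts, out_lbl

clean = np.zeros((100, 4))           # placeholder clean scan
clean_lbl = np.zeros(100, dtype=int)
pts, lbl = inject_random_noise(clean, clean_lbl, n_noise=30)
print(pts.shape, (lbl == 1).sum())  # → (130, 4) 30
```

The real generation code models per-weather effects (attenuation, backscatter, etc.); use the tools directory for dataset construction.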
TODO List
- [x] Initial release. 🚀
- [x] Add download links for Weather-KITTI and Weather-NuScenes.
- [x] Add Denoising Network code.
- [x] Add train and evaluation script on Adverse Weather Dataset.
- [x] Release checkpoints.
- [ ] ...
Citation
If you find our work useful in your research, please consider citing our paper.
