
[NeurIPS 2025] SPIRAL: Semantic-Aware Progressive LiDAR Scene Generation and Understanding


<!-- <p align="right">English | <a href="./README_CN.md">简体中文</a></p> -->
<p align="center">
  <img src="images/spiral.png" width="12.5%" align="center">
  <h1 align="center">
    <strong>Spiral: Semantic-Aware Progressive LiDAR Scene Generation and Understanding</strong>
  </h1>
  <p align="center">
    <a href="https://dekai21.github.io/" target="_blank">Dekai Zhu*</a>&nbsp;&nbsp;&nbsp;&nbsp;
    <a href="https://yixuanhu1.github.io/" target="_blank">Yixuan Hu*</a>&nbsp;&nbsp;&nbsp;&nbsp;
    <a href="https://scholar.google.com/citations?hl=en&user=J9a48hMAAAAJ&view_op=list_works" target="_blank">Youquan Liu</a>&nbsp;&nbsp;&nbsp;&nbsp;
    <a href="https://dylanorange.github.io/" target="_blank">Dongyue Lu</a>&nbsp;&nbsp;&nbsp;&nbsp;
    <a href="https://ldkong.com/" target="_blank">Lingdong Kong</a>&nbsp;&nbsp;&nbsp;&nbsp;
    <a href="https://scholar.google.de/citations?hl=en&user=ELOVd8sAAAAJ&view_op=list_works&sortby=pubdate" target="_blank">Slobodan Ilic</a><br>
    <span class="eql-cntrb"><small>(* Equal Contribution)</small>
    <br><b>NeurIPS 2025</b>
  </p>
  <p align="center">
    <a href="https://arxiv.org/abs/2505.22643" target='_blank'>
      <img src="https://img.shields.io/badge/Paper-%F0%9F%93%96-darkred">
    </a>&nbsp;
    <a href="https://dekai21.github.io/SPIRAL/" target='_blank'>
      <img src="https://img.shields.io/badge/Project-%F0%9F%94%97-blue">
    </a>&nbsp;
  </p>
</p>

| <img src="./images/teaser.png" alt="Teaser" width="100%"> |
| :-: |

Existing LiDAR generative models are limited to producing unlabeled LiDAR scenes, lacking any semantic annotations. Performing post-hoc labeling on these generated scenes requires additional pretrained segmentation models, which introduces extra computational overhead. Moreover, such after-the-fact annotation yields suboptimal segmentation quality.

To address this issue, we make the following contributions:

  • We propose a novel state-of-the-art semantic-aware range-view LiDAR diffusion model, Spiral, which jointly produces depth and reflectance images along with semantic labels.
  • We introduce unified evaluation metrics that comprehensively evaluate the geometric, physical, and semantic quality of generated labeled LiDAR scenes.
  • We demonstrate the effectiveness of the generated LiDAR scenes for training segmentation models, highlighting Spiral's potential for generative data augmentation.
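To make the "jointly produces depth and reflectance images along with semantic labels" idea concrete, here is a minimal sketch of what a labeled range-view sample could look like as a multi-channel array. The channel layout, resolution, and class count below are illustrative assumptions (64 × 1024 with SemanticKITTI's 19 classes plus unlabeled), not the tensor format actually used by the Spiral code:

```python
import numpy as np

# Hypothetical layout: channel 0 = depth, channel 1 = reflectance,
# channels 2..(2 + K - 1) = one-hot semantic labels per range-image pixel.
H, W, K = 64, 1024, 20  # assumed resolution and class count, not read from the repo

depth = np.random.rand(1, H, W).astype(np.float32)
reflectance = np.random.rand(1, H, W).astype(np.float32)

# Random per-pixel class ids, expanded to a one-hot (K, H, W) map.
class_ids = np.random.randint(0, K, size=(H, W))
labels_onehot = np.eye(K, dtype=np.float32)[class_ids].transpose(2, 0, 1)

# A single jointly labeled sample: geometry, physics, and semantics together.
sample = np.concatenate([depth, reflectance, labels_onehot], axis=0)
print(sample.shape)  # (22, 64, 1024)
```

The point of the sketch is only that geometry (depth), physics (reflectance), and semantics (labels) live on the same pixel grid, which is what lets a single range-view diffusion model generate all three jointly.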

:books: Citation

If you find this work helpful for your research, please kindly consider citing our paper:

@inproceedings{zhu2025spiral,
    title     = {Spiral: Semantic-Aware Progressive LiDAR Scene Generation and Understanding},
    author    = {Zhu, Dekai and Hu, Yixuan and Liu, Youquan and Lu, Dongyue and Kong, Lingdong and Ilic, Slobodan},
    booktitle = {The Thirty-ninth Annual Conference on Neural Information Processing Systems},
    year      = {2025}
}

Updates

  • [11/2025] - The code for Spiral is released. :rocket:
  • [10/2025] - The project page is online. :rocket:
  • [09/2025] - This work has been accepted to NeurIPS 2025.

:gear: Installation

To set up the environment, run:

conda env create -f environment.yaml
conda activate spiral

If conda's dependency solver hangs during installation, try mamba instead:

mamba env create -f environment.yaml
conda activate spiral

:hotsprings: Data Preparation

We use the official SemanticKITTI API to preprocess the data by projecting the LiDAR point clouds from Cartesian coordinates into range images. You can download the preprocessed data here. :hugs:
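For readers unfamiliar with range images, the Cartesian-to-range-image step is a spherical projection. Below is a minimal sketch of that projection; the vertical field of view (+3° to −25°) and the 64 × 1024 resolution are the common SemanticKITTI/HDL-64E conventions and are assumptions here, not values read from this repo's preprocessing code:

```python
import numpy as np

def points_to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) Cartesian point cloud to an (h, w) range image
    via spherical projection. Empty pixels are filled with -1."""
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    depth = np.linalg.norm(points, axis=1)
    # Drop degenerate zero-range returns to avoid division by zero.
    keep = depth > 1e-6
    points, depth = points[keep], depth[keep]
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    yaw = np.arctan2(y, x)        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / depth)  # elevation

    # Normalize angles to [0, 1] image coordinates.
    u = 0.5 * (1.0 - yaw / np.pi)             # column (left = +pi)
    v = 1.0 - (pitch - fov_down_rad) / fov    # row (top = fov_up)

    cols = np.clip((u * w).astype(np.int32), 0, w - 1)
    rows = np.clip((v * h).astype(np.int32), 0, h - 1)

    range_img = np.full((h, w), -1.0, dtype=np.float32)
    # Assign in descending-depth order so the closest return wins per pixel.
    order = np.argsort(-depth)
    range_img[rows[order], cols[order]] = depth[order]
    return range_img
```

For example, a point 10 m straight ahead of the sensor (`[10, 0, 0]`) lands in the middle column, near the rows covering 0° elevation. The same projection would also carry each point's reflectance and semantic label onto the pixel grid.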

:rocket: Getting Started

First, specify the data_path in utils/option.py to point to the directory of the preprocessed data. Then simply run:

python train.py

to start the training.
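Once trained, Spiral's samples are range images, so consuming them as point clouds means inverting the spherical projection. The following is a sketch of that back-projection under the same assumed sensor settings as above (+3° to −25° vertical FOV); it is not a utility taken from this repo:

```python
import numpy as np

def range_image_to_points(range_img, fov_up=3.0, fov_down=-25.0):
    """Back-project an (h, w) range image to an (M, 3) point cloud.
    Pixels with non-positive range are treated as empty and skipped."""
    h, w = range_img.shape
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Pixel centers back to angles (inverse of the forward projection).
    yaw = (1.0 - 2.0 * (cols + 0.5) / w) * np.pi
    pitch = (1.0 - (rows + 0.5) / h) * fov + fov_down_rad

    valid = range_img > 0
    r = range_img[valid]
    yaw, pitch = yaw[valid], pitch[valid]

    # Spherical to Cartesian coordinates.
    x = r * np.cos(pitch) * np.cos(yaw)
    y = r * np.cos(pitch) * np.sin(yaw)
    z = r * np.sin(pitch)
    return np.stack([x, y, z], axis=1)
```

Round-tripping a point through projection and back-projection recovers it up to pixel-quantization error, which is why the range-view representation preserves the scene geometry that the segmentation-training experiments rely on.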

<!-- ## License This work is under the <a rel="license" href="https://www.apache.org/licenses/LICENSE-2.0">Apache License Version 2.0</a>, while some specific implementations in this codebase might be with other licenses. Kindly refer to [LICENSE.md](docs/LICENSE.md) for a more careful check, if you are using our code for commercial matters. -->

Acknowledgements

This work is developed based on the R2DM codebase.
