<p align="center">
  <h2 align="center">LiSA: LiDAR Localization with Semantic Awareness</h2>
  <h3 align="center">CVPR 2024 Highlight</h3>
</p>
<div align="center">
  <a href="https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_LiSA_LiDAR_Localization_with_Semantic_Awareness_CVPR_2024_paper.pdf"><img src="https://img.shields.io/badge/CVF-Paper-blue" alt="Paper PDF"></a>
  <img src="img/trajectory_all_small.gif" alt="Localization trajectories" width="400"/>
</div>

## ⚙️ Environment

- Spconv

```bash
conda env create -f lisa-spconv.yaml
conda activate lisa-spconv
cd LiSA-spconv/third_party
python setup.py install
```

- MinkowskiEngine

```bash
conda env create -f lisa-mink.yaml
conda activate lisa-mink  # environment name assumed from the yaml filename
```
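
After setting up either environment, a quick import check can confirm the sparse-convolution backend is usable. This is a minimal sketch; the package names are assumptions based on the environment file names, so enable whichever line matches the environment you created.

```python
# Quick import check for the sparse-convolution backend; package names
# are assumptions based on the environment file names.
import torch

import spconv.pytorch as spconv   # for the lisa-spconv environment
# import MinkowskiEngine as ME    # for the lisa-mink environment

print("CUDA available:", torch.cuda.is_available())
```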

## 🔨 Dataset

We currently support the Oxford Radar RobotCar and NCLT datasets.

We also use PQEE to refine the Oxford poses and provide the corrected version, QEOxford.

The Oxford, QEOxford, and NCLT data should be organized as follows:

- (QE)Oxford

```
data_root
├── 2019-01-11-14-02-26-radar-oxford-10k
│   ├── velodyne_left
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── sphere_velodyne_left_feature32
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── velodyne_left_calibrateFalse.h5
│   ├── velodyne_left_False.h5
│   ├── rot_tr.bin
│   ├── tr.bin
│   ├── tr_add_mean.bin
├── …
├── (QE)Oxford_pose_stats.txt
├── train_split.txt
├── valid_split.txt
```
- NCLT

```
data_root
├── 2012-01-22
│   ├── velodyne_left
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── sphere_velodyne_left_feature32
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── velodyne_left_False.h5
├── …
├── NCLT_pose_stats.txt
├── train_split.txt
├── valid_split.txt
```

The split and pose-statistics files are provided in the dataset directory.
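
As a sanity check before training, a short sketch like the following can verify the expected (QE)Oxford layout. `data_root` is a placeholder; adapt the sequence name to the data you downloaded.

```python
# Minimal layout check for the (QE)Oxford structure shown above.
# data_root and the sequence name are placeholders for your own paths.
from pathlib import Path

data_root = Path("data_root")
seq = data_root / "2019-01-11-14-02-26-radar-oxford-10k"

expected = [
    seq / "velodyne_left",
    seq / "sphere_velodyne_left_feature32",
    seq / "velodyne_left_calibrateFalse.h5",
    seq / "velodyne_left_False.h5",
    seq / "rot_tr.bin",
    seq / "tr.bin",
    seq / "tr_add_mean.bin",
    data_root / "train_split.txt",
    data_root / "valid_split.txt",
]
for p in expected:
    status = "ok" if p.exists() else "MISSING"
    print(f"{status:7s} {p}")
```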

## 🎨 Data preparation

We use SphereFormer for data preprocessing (used only for training) to generate the corresponding semantic features. Download the SphereFormer code, put dataset.py into its util directory, and put get_seg_fearure.py into its root directory.
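
Once generated, a feature file can be inspected with a sketch like the one below. The float32 dtype and the 32-channel layout are assumptions inferred from the folder name `sphere_velodyne_left_feature32`; verify them against get_seg_fearure.py before relying on this.

```python
# Inspect one generated semantic-feature file. The float32 dtype and
# 32-channel layout are assumptions inferred from the folder name
# sphere_velodyne_left_feature32; check get_seg_fearure.py to confirm.
import numpy as np

feat = np.fromfile(
    "data_root/2012-01-22/sphere_velodyne_left_feature32/xxx.bin",
    dtype=np.float32,
).reshape(-1, 32)
print("points:", feat.shape[0], "feature channels:", feat.shape[1])
```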

## 🌟 Visualization

*(Figures: visualization results on QEOxford and NCLT; see the repository for the images.)*

## 💃 Run

Train:

```bash
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_addr 127.0.0.34 --master_port 29503 train_ddp.py
```
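
For context, `torch.distributed.launch` spawns one process per GPU and passes `--local_rank` to each; the sketch below illustrates how a DDP entry point typically consumes it. This is illustrative only, not necessarily the exact handling in train_ddp.py; on recent PyTorch, `torchrun` replaces the deprecated launcher and exposes `LOCAL_RANK` as an environment variable instead.

```python
# Illustrative sketch (not the repository's actual code) of how a DDP
# entry point such as train_ddp.py typically consumes the --local_rank
# argument injected by torch.distributed.launch.
import argparse

import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
# init_process_group reads MASTER_ADDR / MASTER_PORT (set from
# --master_addr / --master_port) via the default env:// rendezvous.
dist.init_process_group(backend="nccl")
print(f"rank {dist.get_rank()} / world size {dist.get_world_size()}")
```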

Test:

```bash
python test.py
```

## 🤗 Model zoo

Pretrained LiSA models for Oxford, QEOxford, and NCLT can be downloaded here.

## 🙏 Acknowledgements

We thank the authors of SGLoc, SphereFormer, and DiffKD for sharing their code.

## 🎓 Citation

If you find this codebase useful for your research, please use the following entry.

```bibtex
@inproceedings{yang2024lisa,
  title={LiSA: LiDAR Localization with Semantic Awareness},
  author={Yang, Bochun and Li, Zijun and Li, Wen and Cai, Zhipeng and Wen, Chenglu and Zang, Yu and Muller, Matthias and Wang, Cheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={15271--15280},
  year={2024}
}
```
