LiSA
[CVPR 2024 Highlight] LiSA: LiDAR Localization with Semantic Awareness
<p align="center"><a href="https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_LiSA_LiDAR_Localization_with_Semantic_Awareness_CVPR_2024_paper.pdf"><img src='https://img.shields.io/badge/CVF-Paper-blue' alt='Paper PDF'></a></p>

<div align="center"><img src="img/trajectory_all_small.gif" alt="Trajectory visualization" width="400"/></div>

⚙️ Environment
- Spconv

```shell
conda env create -f lisa-spconv.yaml
conda activate lisa-spconv
cd LiSA-spconv/third_party
python setup.py install
```

- MinkowskiEngine

```shell
conda env create -f lisa-mink.yaml
```
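After installing either environment, you can sanity-check that the key packages are importable. A minimal sketch (the package names `spconv` and `MinkowskiEngine` are the usual import names for these libraries; adjust if your installation differs):

```python
import importlib.util

def check(pkg):
    """Return True if the package can be found in the current environment."""
    return importlib.util.find_spec(pkg) is not None

# Each LiSA environment needs torch plus its sparse-convolution backend.
for pkg in ("torch", "spconv", "MinkowskiEngine"):
    print(f"{pkg}: {'OK' if check(pkg) else 'missing'}")
```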
🔨 Dataset
We currently support the Oxford Radar RobotCar and NCLT datasets.
We also use PQEE to improve the pose quality of Oxford, and we provide the corrected poses as QEOxford.
The data of the Oxford, QEOxford, and NCLT datasets should be organized as follows:
- (QE)Oxford

```
data_root
├── 2019-01-11-14-02-26-radar-oxford-10k
│   ├── velodyne_left
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── sphere_velodyne_left_feature32
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── velodyne_left_calibrateFalse.h5
│   ├── velodyne_left_False.h5
│   ├── rot_tr.bin
│   ├── tr.bin
│   ├── tr_add_mean.bin
├── …
├── (QE)Oxford_pose_stats.txt
├── train_split.txt
├── valid_split.txt
```
- NCLT

```
data_root
├── 2012-01-22
│   ├── velodyne_left
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── sphere_velodyne_left_feature32
│   │   ├── xxx.bin
│   │   ├── xxx.bin
│   │   ├── …
│   ├── velodyne_left_False.h5
├── …
├── NCLT_pose_stats.txt
├── train_split.txt
├── valid_split.txt
```
The files used are provided in the dataset directory.
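The point-cloud scans in `velodyne_left` can be read with NumPy. A minimal sketch, assuming each `.bin` stores flat float32 values with four channels per point (x, y, z, intensity); verify the channel count against your own data before relying on it:

```python
import numpy as np

def load_scan(path, cols=4):
    """Load a flat float32 .bin scan and reshape to (N, cols) points.

    cols is an assumption (x, y, z, intensity); adjust if the dataset
    stores a different number of channels per point.
    """
    return np.fromfile(path, dtype=np.float32).reshape(-1, cols)
```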
🎨 Data preparation
We use SphereFormer for data preprocessing (only needed for training) to generate the corresponding semantic features. Download the SphereFormer code, put dataset.py into util, and put get_seg_fearure.py into the root directory.
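The directory layout above suggests the preprocessing writes one feature file per scan into `sphere_velodyne_left_feature32`. A hypothetical sketch of that convention (the function name and 32-channel float32 format are assumptions inferred from the directory name, not the repo's actual API):

```python
import numpy as np
from pathlib import Path

def save_features(features, scan_path, out_dir="sphere_velodyne_left_feature32"):
    """Save per-point semantic features (N, 32) as a float32 .bin file,
    mirroring the scan's filename, next to the velodyne_left directory."""
    out = Path(scan_path).parent.parent / out_dir
    out.mkdir(parents=True, exist_ok=True)
    dest = out / Path(scan_path).name
    np.asarray(features, dtype=np.float32).tofile(dest)
    return dest
```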
🌟 Visualization
QEOxford

NCLT

💃 Run
Train:

```shell
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_addr 127.0.0.34 --master_port 29503 train_ddp.py
```

Test:

```shell
python test.py
```
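LiDAR localization is typically evaluated by mean translation and rotation error against ground-truth poses. A sketch of these standard metrics (not the repo's own evaluator): translation error is the Euclidean distance between positions, and rotation error is the geodesic angle between rotation matrices.

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error (same units as t) and rotation error (degrees)."""
    t_err = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))
    # Geodesic distance on SO(3): angle of the relative rotation R_est^T R_gt.
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return t_err, r_err
```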
🤗 Model zoo
The models of LiSA on Oxford, QEOxford, and NCLT can be downloaded here.
🙏 Acknowledgements
We thank the authors of SGLoc, SphereFormer, and DiffKD for sharing their code.
🎓 Citation
If you find this codebase useful for your research, please cite us with the following BibTeX entry:
```bibtex
@inproceedings{yang2024lisa,
  title={LiSA: LiDAR Localization with Semantic Awareness},
  author={Yang, Bochun and Li, Zijun and Li, Wen and Cai, Zhipeng and Wen, Chenglu and Zang, Yu and Muller, Matthias and Wang, Cheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={15271--15280},
  year={2024}
}
```