# LeSTA

**Learning Self-Supervised Traversability with Navigation Experiences of Mobile Robots: A Risk-Aware Self-Training Approach** (IEEE RA-L '24)
LeSTA directly learns robot-specific traversability in a self-supervised manner by using a short period of manual driving experience.
## :loudspeaker: News & Updates
- 2024.07.30: Our paper is accepted for presentation at IEEE ICRA@40 in Rotterdam, Netherlands
- 2024.02.29: Our paper is accepted by IEEE Robotics and Automation Letters (IEEE RA-L)
- 2024.02.19: We release the urban-traversability-dataset for learning terrain traversability in urban environments
## :rocket: What's in this repo
- C++ package for LeSTA with ROS interface (`lesta_ros`)
  - Traversability label generation from a LiDAR-reconstructed height map
  - Traversability inference/mapping using a learned network
- PyTorch scripts for training the LeSTA model (`pylesta`)
## :hammer_and_wrench: Installation
Our project is built on ROS and has been successfully tested on the following setup:
- Ubuntu 20.04 / ROS Noetic
- PyTorch 2.2.2 / LibTorch 2.6.0
### lesta_ros
1. Install the Grid Map library for height mapping:

   ```bash
   sudo apt install ros-noetic-grid-map -y
   ```

2. Install LibTorch (choose one option):

   <details>
   <summary><b>CPU-only version (Recommended for easier setup)</b></summary>

   ```bash
   wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.6.0%2Bcpu.zip -P ~/Downloads
   sudo unzip ~/Downloads/libtorch-cxx11-abi-shared-with-deps-2.6.0+cpu.zip -d /opt
   rm ~/Downloads/libtorch-cxx11-abi-shared-with-deps-2.6.0+cpu.zip
   ```
   </details>

   <details>
   <summary>GPU-supported version (e.g. CUDA 11.8)</summary>

   ```bash
   # To be updated...
   ```
   </details>

3. Build the lesta_ros package:

   ```bash
   cd ~/ros_ws/src
   git clone https://github.com/Ikhyeon-Cho/LeSTA.git
   cd ..
   catkin build lesta
   source devel/setup.bash
   ```
:bulb: Notes:
- We recommend starting without GPU processing. The network runs efficiently even on a single CPU core.
- If you are interested in height map reconstruction, see height_mapping for more details.
### pylesta
1. Install PyTorch (choose one option):

   <details>
   <summary><b>CPU-only setup</b></summary>

   We recommend using a virtual environment for the PyTorch installation.

   **Conda**

   ```bash
   conda create -n lesta python=3.8 -y
   conda activate lesta
   conda install pytorch=2.2 torchvision cpuonly tensorboard -c pytorch -y
   ```

   **Virtualenv**

   ```bash
   virtualenv -p python3.8 lesta-env
   source lesta-env/bin/activate
   pip install torch==2.2 torchvision tensorboard --index-url https://download.pytorch.org/whl/cpu
   ```
   </details>

   <details>
   <summary>CUDA setup</summary>

   We recommend using a virtual environment for the PyTorch installation.

   **Conda**

   ```bash
   conda create -n lesta python=3.8 -y
   conda activate lesta
   conda install pytorch=2.2 torchvision tensorboard cudatoolkit=11.8 -c pytorch -c conda-forge -y
   ```

   **Virtualenv**

   ```bash
   virtualenv -p python3.8 lesta-env
   source lesta-env/bin/activate
   pip install torch==2.2 torchvision tensorboard --index-url https://download.pytorch.org/whl/cu118
   ```
   </details>

2. Install the pylesta package:

   ```bash
   # Make sure your virtual environment is activated
   cd LeSTA
   pip install -e pylesta
   ```
:whale: If you are familiar with Docker, see here for easier CUDA environment setup.
## :rocket: Run the package
You have two options:
- Train the traversability model with your own robot from scratch
- Use a pre-trained model to predict traversability
⚠️ Note: For optimal performance, we highly recommend training the model with your own robot's data. The robot's unique sensor setup and motion dynamics are crucial for accurate traversability predictions, and the configuration of our robot may differ from yours. For details on our settings, visit the urban-traversability-dataset repo.
The entire training-to-deployment pipeline consists of three steps:
- Label Generation: Generate the traversability label from the dataset.
- Model Training: Train the traversability model with the generated labels.
- Traversability Estimation: Predict/map the terrain traversability with your own robot.
For rapid testing of the project, you can use the checkpoints in [Model Zoo](#model-zoo) and go directly to [Traversability Estimation](#3-traversability-estimation).
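The three-step pipeline above can be sketched as a minimal data-flow skeleton. Everything here is illustrative: the function names and data layout are hypothetical stand-ins, not the actual `lesta_ros`/`pylesta` API.

```python
# Hypothetical sketch of the training-to-deployment pipeline.
# Step 1 labels traversed cells, step 2 fits a model on the labeled
# subset, step 3 scores arbitrary terrain cells with that model.

def generate_labels(height_map_cells):
    """Step 1: mark cells the robot actually drove over as traversable."""
    return [{"features": c["features"],
             "label": 1.0 if c["visited"] else None}
            for c in height_map_cells]

def train_model(labeled_cells):
    """Step 2: fit a model on the labeled subset (stub: mean label)."""
    labels = [c["label"] for c in labeled_cells if c["label"] is not None]
    mean = sum(labels) / len(labels)
    return lambda features: mean  # stand-in for the learned network

def estimate_traversability(model, cell_features):
    """Step 3: score unseen terrain cells with the trained model."""
    return [model(f) for f in cell_features]

cells = [{"features": [0.1, 0.0], "visited": True},
         {"features": [0.4, 0.2], "visited": False}]
model = train_model(generate_labels(cells))
scores = estimate_traversability(model, [c["features"] for c in cells])
```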
### 1. Label Generation
**Launch ROS node**

```bash
roslaunch lesta label_generation.launch
```
**Generate labels with rosbag**

> Note: See [sample datasets](#sample-datasets) for example rosbag files.

```bash
rosbag play {your-rosbag}.bag --clock -r 3
```
**Save traversability labels**

```bash
rosservice call /lesta/save_label_map "training_set" ""  # {filename} {directory}
```
The labeled height map will be saved as a single `training_set.pcd` file in the root directory of the package.
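If you want to inspect the saved labels outside ROS, a PCD file starts with a plain-text header describing its fields. The sketch below parses that header; the field names shown are illustrative assumptions, so check your own `training_set.pcd` for the actual layout.

```python
# Minimal ASCII PCD header reader (illustrative; the actual field
# layout of training_set.pcd may differ -- inspect your file first).

def read_pcd_header(lines):
    """Parse KEY value... pairs from a PCD header, stopping at DATA."""
    header = {}
    for line in lines:
        if line.startswith("#"):       # skip comment lines
            continue
        key, _, value = line.partition(" ")
        header[key] = value.split()
        if key == "DATA":              # header ends at the DATA line
            break
    return header

# Hypothetical header of a labeled height map saved as PCD:
sample = [
    "# .PCD v0.7 - Point Cloud Data file format",
    "VERSION 0.7",
    "FIELDS x y z traversability",
    "SIZE 4 4 4 4",
    "TYPE F F F F",
    "COUNT 1 1 1 1",
    "WIDTH 2",
    "HEIGHT 1",
    "POINTS 2",
    "DATA ascii",
]
header = read_pcd_header(sample)
```

For binary PCD files (`DATA binary`), the same header applies but the payload must be decoded per the `SIZE`/`TYPE` fields, e.g. with a library such as Open3D.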
### 2. Model Training
**Launch training script with parameters**

> Note: See `pylesta/configs/lesta.yaml` for more training details.

```bash
# Make sure your virtual environment is activated
cd LeSTA
python pylesta/tools/train.py --dataset "training_set.pcd"
```
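To give a feel for what self-training means here, the toy loop below pseudo-labels unlabeled samples whose prediction is confident enough, then refits on the enlarged label set. This is a generic sketch, not the pylesta implementation: the one-feature "network", the least-squares fit, and the fixed threshold are all illustrative simplifications of the paper's risk-aware scheme.

```python
# Generic self-training sketch (toy stand-in for the actual method):
# confident predictions on unlabeled data become pseudo-labels for
# the next training round; uncertain samples are left unlabeled.

def predict(weight, x):
    """Toy one-feature 'network' with output clamped to [0, 1]."""
    return max(0.0, min(1.0, weight * x))

def self_train(labeled, unlabeled, rounds=3, threshold=0.9):
    weight = 1.0
    for _ in range(rounds):
        # Fit: least-squares slope through the origin (toy stand-in).
        num = sum(x * y for x, y in labeled)
        den = sum(x * x for x, _ in labeled) or 1.0
        weight = num / den
        # Pseudo-label only confident unlabeled samples.
        remaining = []
        for x in unlabeled:
            p = predict(weight, x)
            if p >= threshold or p <= 1 - threshold:
                labeled.append((x, float(round(p))))
            else:
                remaining.append(x)  # stays unlabeled this round
        unlabeled = remaining
    return weight, labeled

# Two hand-labeled samples, two unlabeled ones; only the confident
# unlabeled sample (0.95) gets pseudo-labeled, 0.5 stays out.
weight, labels = self_train([(1.0, 1.0), (0.1, 0.0)], [0.95, 0.5])
```

The confidence threshold is what makes the loop "risk-aware" in spirit: low-confidence cells never contaminate the training set.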
### 3. Traversability Estimation
**Prerequisites**

Configure the `model_path` variable in `lesta_ros/config/*_node.yaml` with your model checkpoint:
- `trav_prediction_node.yaml`
- `trav_mapping_node.yaml`

> Note: See [Model Zoo](#model-zoo) for our pre-trained checkpoints.
**Launch ROS node**

We provide two options for traversability estimation:
<div align="center">
  <img src="assets/Traversability Prediction.gif" width="45%" alt="Traversability Prediction">
  <img src="assets/Traversability Mapping.gif" width="45%" alt="Traversability Mapping">
  <p><i>Left: Robot-centric traversability prediction. Right: Real-time traversability mapping.</i></p>
  <table>
    <tr>
      <th width="400">1. Traversability Prediction</th>
      <th width="400">2. Traversability Mapping</th>
    </tr>
    <tr>
      <td>
        <ul>
          <li><strong>Robot-centric local traversability</strong></li>
          <li>Suitable for local motion planning</li>
          <li>Scores traversability from model inference</li>
        </ul>
      </td>
      <td>
        <ul>
          <li><strong>Global traversability mapping</strong></li>
          <li>Suitable for global path planning</li>
          <li>Updates traversability scores over time</li>
        </ul>
      </td>
    </tr>
  </table>
</div>

How to run:
- For traversability prediction:

  ```bash
  roslaunch lesta traversability_prediction.launch
  ```

- For traversability mapping:

  ```bash
  roslaunch lesta traversability_mapping.launch
  ```
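The mapping mode fuses repeated predictions for the same terrain cell into a persistent score. A minimal way to picture that fusion is an incremental per-cell mean, sketched below; this is an illustrative model, and the actual update rule used by the lesta mapping node may differ.

```python
# Illustrative per-cell fusion of traversability scores over time
# (hypothetical; not the lesta_ros implementation). Each new model
# prediction for a grid cell is folded into a running mean.

class TraversabilityGrid:
    def __init__(self):
        self.mean = {}   # (row, col) -> fused traversability score
        self.count = {}  # (row, col) -> number of observations

    def update(self, cell, score):
        """Fuse a new score into the cell with an incremental mean."""
        n = self.count.get(cell, 0) + 1
        m = self.mean.get(cell, 0.0)
        self.mean[cell] = m + (score - m) / n
        self.count[cell] = n

grid = TraversabilityGrid()
for score in (0.8, 0.6, 1.0):  # three passes over the same cell
    grid.update((3, 4), score)
# fused score converges to the average of the observations
```

A recursive mean keeps memory constant per cell; schemes that weight recent or high-confidence observations more heavily are a natural extension.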
**Test the node with rosbag**

```bash
rosbag play {your-rosbag}.bag --clock -r 2
```
## Sample datasets
Download rosbag files to test the package. The datasets below are configured to run with the default settings:

- Campus road dataset [Google Drive]
- Parking lot dataset [Google Drive]

See the urban-traversability-dataset repository for more data samples.
## Model Zoo
<table>
  <tr>
    <th>Model</th>
    <th>Description</th>
    <th>Environment</th>
    <th>Features</th>
    <th>Download</th>
  </tr>
  <tr>
    <td><strong>LeSTA-parking-lot</strong></td>
    <td>The model trained on the parking lot dataset</td>
    <td>Urban (parking lot with low-height curbs)</td>
    <td>
      <ul>
        <li>Step</li>
        <li>Slope</li>
        <li>Roughness</li>
        <li>Curvature</li>
      </ul>
    </td>
    <td></td>
  </tr>
</table>