Installation | Video | Paper | Dataset
LeSTA directly learns robot-specific traversability in a self-supervised manner by using a short period of manual driving experience.
- 2024.07.30: Our paper is accepted for presentation at IEEE ICRA@40 in Rotterdam, Netherlands
- 2024.02.29: Our paper is accepted by IEEE Robotics and Automation Letters (IEEE RA-L)
- 2024.02.19: We release the urban-traversability-dataset for learning terrain traversability in urban environments
- C++ package for LeSTA with ROS interface (lesta_ros)
  - Traversability label generation from a LiDAR-reconstructed height map
  - Traversability inference/mapping using a learned network
- PyTorch scripts for training the LeSTA model (pylesta)
Our project is built on ROS and has been successfully tested on the following setup:
- Ubuntu 20.04 / ROS Noetic
- PyTorch 2.2.2 / LibTorch 2.6.0
- Install the Grid Map library for height mapping:
  sudo apt install ros-noetic-grid-map -y
- Install LibTorch (choose one option):

  CPU-only version (recommended for easier setup):
  wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.6.0%2Bcpu.zip -P ~/Downloads
  sudo unzip ~/Downloads/libtorch-cxx11-abi-shared-with-deps-2.6.0+cpu.zip -d /opt
  rm ~/Downloads/libtorch-cxx11-abi-shared-with-deps-2.6.0+cpu.zip

  GPU-supported version (e.g. CUDA 11.8):
  # To be updated...
- Build the lesta_ros package:
  cd ~/ros_ws/src
  git clone https://github.com/Ikhyeon-Cho/LeSTA.git
  cd ..
  catkin build lesta
  source devel/setup.bash
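If CMake cannot locate LibTorch during the build, you can point it at the unzipped library explicitly. This is only a suggestion, assuming the /opt/libtorch location from the LibTorch step above; adjust the path if you unpacked it elsewhere.

```bash
# Point CMake at LibTorch (the /opt/libtorch path assumes the install step above)
catkin build lesta --cmake-args -DCMAKE_PREFIX_PATH=/opt/libtorch
```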
💡 Notes:
- We recommend starting without GPU processing. The network runs efficiently even on a single CPU core.
- If you are interested in height map reconstruction, see height_mapping for more details.
- Install PyTorch (choose one option). We recommend using a virtual environment for the PyTorch installation.

  CPU-only setup

  Conda:
  conda create -n lesta python=3.8 -y
  conda activate lesta
  conda install pytorch=2.2 torchvision cpuonly tensorboard -c pytorch -y

  Virtualenv:
  virtualenv -p python3.8 lesta-env
  source lesta-env/bin/activate
  pip install torch==2.2 torchvision tensorboard --index-url https://download.pytorch.org/whl/cpu

  CUDA setup

  Conda:
  conda create -n lesta python=3.8 -y
  conda activate lesta
  conda install pytorch=2.2 torchvision tensorboard cudatoolkit=11.8 -c pytorch -c conda-forge -y

  Virtualenv:
  virtualenv -p python3.8 lesta-env
  source lesta-env/bin/activate
  pip install torch==2.2 torchvision tensorboard --index-url https://download.pytorch.org/whl/cu118
- Install the pylesta package:
  # Make sure your virtual environment is activated
  cd LeSTA
  pip install -e pylesta
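As a quick optional sanity check (not part of the official instructions), you can confirm that PyTorch imports from the activated environment:

```bash
# Should print the installed PyTorch version (e.g. 2.2.x)
python -c "import torch; print(torch.__version__)"
```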
🐳 If you are familiar with Docker, see here for an easier CUDA environment setup.
You have two options:
- Train the traversability model with your own robot from scratch
- Use a pre-trained model to predict traversability
⚠️ Note: For optimal performance, we highly recommend training the model with your own robot's data. The robot's unique sensor setup and motion dynamics are crucial for accurate traversability prediction, and the configuration of our robot may differ from yours. For details on our settings, visit the urban-traversability-dataset repo.
The entire training-to-deployment pipeline consists of three steps:
- Label Generation: Generate the traversability label from the dataset.
- Model Training: Train the traversability model with the generated labels.
- Traversability Estimation: Predict and map terrain traversability with your own robot.
For rapid testing of the project, you can use the checkpoints in #model-zoo and go directly to #traversability-estimation.
roslaunch lesta label_generation.launch
Note: See #sample-datasets for example rosbag files.
rosbag play {your-rosbag}.bag --clock -r 3
rosservice call /lesta/save_label_map "training_set" "" # {filename} {directory}
The labeled height map will be saved as a single training_set.pcd file in the root directory of the package.
Note: See pylesta/configs/lesta.yaml for more training details.
# Make sure your virtual environment is activated
cd LeSTA
python pylesta/tools/train.py --dataset "training_set.pcd"
Configure the model_path variable in lesta_ros/config/*_node.yaml with your model checkpoint (see the example snippet below the list):
- trav_prediction_node.yaml
- trav_mapping_node.yaml
Note: See #model-zoo for our pre-trained checkpoints.
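A minimal illustration of this setting is shown below. Only the model_path key and the config file names come from this repository; the checkpoint path is a placeholder to replace with your own:

```yaml
# lesta_ros/config/trav_prediction_node.yaml -- illustrative snippet, placeholder path
model_path: "/home/user/ros_ws/src/LeSTA/checkpoints/lesta_model.pt"
```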
We provide two options for traversability estimation:


Left: Robot-centric traversability prediction. Right: Real-time traversability mapping.
How to run:
- For traversability prediction:
  roslaunch lesta traversability_prediction.launch
- For traversability mapping:
  roslaunch lesta traversability_mapping.launch
  rosbag play {your-rosbag}.bag --clock -r 2
Download rosbag files to test the package. The datasets below are configured to run with the default settings:
- Campus road dataset [Google Drive]
- Parking lot dataset [Google Drive]
- See the urban-traversability-dataset repository for more data samples.
To be updated...
- Artifacts from dynamic objects:
  - We currently implement a raycasting-based approach to remove artifacts left by dynamic objects (an illustrative sketch follows this list).
  - This is crucial for an accurate static terrain representation, which directly impacts prediction quality.
  - However, it is not yet enough to handle all artifacts.
  - We are working on more robust methods to detect and filter dynamic objects in real time.
- Performance degradation due to noisy height mapping:
  - Traversability is learned and predicted from a dense height map.
  - The dense height map is built by concatenating many sparse LiDAR scans.
  - Good SLAM / 3D pose estimation is required to obtain a good height map.
  - In typical settings, FAST-LIO2, LIO-SAM, or CT-ICP are good starting points.
  - We are working on improving the height mapping accuracy.
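To make the raycasting idea above concrete: every LiDAR return defines a free-space ray from the sensor to the hit point, and any previously mapped surface that sits above a point the ray passes through unobstructed was most likely left by something that has since moved. The Python sketch below only illustrates this idea under simplified assumptions (a 2.5D NaN-padded height grid and straight-line sampling at half-cell steps); it is not the lesta_ros implementation, and all names and parameters are illustrative.

```python
import numpy as np

def clear_dynamic_artifacts(height, origin_xy, resolution, sensor_xyz, hits_xyz, clearance=0.1):
    """Reset height-map cells that a new LiDAR ray passes below unobstructed.

    height     : 2D array of surface heights (NaN = unknown), indexed [row, col]
    origin_xy  : world (x, y) of cell [0, 0]
    resolution : cell size in meters
    sensor_xyz : LiDAR origin of the current scan, in the map frame
    hits_xyz   : (N, 3) LiDAR end points in the map frame
    clearance  : margin before a stored height is declared stale
    """
    sensor = np.asarray(sensor_xyz, dtype=float)
    for hit in np.asarray(hits_xyz, dtype=float):
        ray = hit - sensor
        horiz = np.linalg.norm(ray[:2])
        if horiz < resolution:
            continue
        n_steps = int(horiz / (0.5 * resolution))    # sample at half-cell spacing
        for s in range(1, n_steps):                  # skip the hit cell itself
            p = sensor + (s / n_steps) * ray         # point on the free-space ray
            col = int((p[0] - origin_xy[0]) / resolution)
            row = int((p[1] - origin_xy[1]) / resolution)
            if not (0 <= row < height.shape[0] and 0 <= col < height.shape[1]):
                continue
            # The ray traveled through this cell at height p[2]; if the map claims
            # a surface above that, the surface was most likely a moving object.
            if not np.isnan(height[row, col]) and height[row, col] > p[2] + clearance:
                height[row, col] = np.nan
    return height
```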
Thank you for citing our paper if this project helps your research:
Ikhyeon Cho and Woojin Chung. 'Learning Self-Supervised Traversability With Navigation Experiences of Mobile Robots: A Risk-Aware Self-Training Approach', IEEE Robotics and Automation Letters, Feb. 2024.
@article{cho2024learning,
title={Learning Self-Supervised Traversability With Navigation Experiences of Mobile Robots: A Risk-Aware Self-Training Approach},
author={Cho, Ikhyeon and Chung, Woojin},
journal={IEEE Robotics and Automation Letters},
year={2024},
volume={9},
number={5},
pages={4122-4129},
doi={10.1109/LRA.2024.3376148}
}
You can also check the paper of our baseline method:
Hyunsuk Lee and Woojin Chung. 'A Self-Training Approach-Based Traversability Analysis for Mobile Robots in Urban Environments', IEEE International Conference on Robotics and Automation (ICRA), 2021.
@inproceedings{lee2021self,
title={A self-training approach-based traversability analysis for mobile robots in urban environments},
author={Lee, Hyunsuk and Chung, Woojin},
booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
pages={3389--3394},
year={2021},
organization={IEEE}
}
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
For any questions or feedback, feel free to contact us or open an issue on GitHub!
- Ikhyeon Cho: tre0430 (at) korea.ac.kr