
Lidar-based Online Localisation and Colored Mesh Reconstruction ROS2/ROS1 Architecture

This repository contains a hybrid ROS2/ROS1 architecture for LiDAR-based SLAM and real-time 3D colored mesh reconstruction using a Truncated Signed Distance Function (TSDF), aimed at ground exploration robots.

Architecture Overview

The robot estimates its localisation and builds a global map of its environment in the form of a LiDAR point cloud using the lidarslam_ros2 SLAM architecture by rsasaki0109 on ROS2, with an EKF fusing wheel odometry and IMU measurements as the localisation prior.
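The EKF fusion presumably relies on the robot_localization package listed in the dependencies below; as a minimal standalone sketch (with a hypothetical params file configuring the odometry and IMU inputs), such a node can be run like this:

source /opt/ros/galactic/setup.bash
# ekf_params.yaml is a hypothetical file declaring the wheel odometry and IMU topics to fuse
ros2 run robot_localization ekf_node --ros-args --params-file ekf_params.yaml

In practice the OLCMR launch files start and configure this node for you.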

The LiDAR point cloud is colored by geometric projection of the RGB camera images, using intrinsic and extrinsic camera parameters estimated with the Kalibr package from ethz-asl.

Online TSDF surface reconstruction is performed under ROS1 Noetic using the Voxblox package from ethz-asl. Data transfer between ROS1 and ROS2 nodes is handled by a ROS1/ROS2 bridge.
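For reference, a minimal manual invocation of such a bridge with the standard ros1_bridge looks like the following (the launch files below take care of this in practice; dynamic_bridge is one of several bridge executables the package provides):

# a ROS1 master (roscore) must already be running; source ROS1 first, then ROS2
source /opt/ros/noetic/setup.bash
source /opt/ros/galactic/setup.bash
ros2 run ros1_bridge dynamic_bridge --bridge-all-topics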

The architecture has been evaluated on the Quad-Easy trajectory of the 2021 Newer College dataset. The rosbag used for testing the architecture can be downloaded from the Newer College dataset website.

(Figure: OLCMR architecture main components and data transport)

---

Requirements:

  • Ubuntu 20.04
  • ROS2 Galactic
  • ROS1 Noetic
  • Rviz and Rviz2

A fork of lidarslam_ros2 with minor adaptations is included as a git submodule.

Installation

Package dependencies:

ros1_bridge, perception_pcl (ROS2), and robot_localization (ROS2):

sudo apt install ros-galactic-ros1-bridge ros-galactic-perception-pcl ros-galactic-robot-localization

g2o library:

cd ~
git clone https://github.com/RainerKuemmerle/g2o.git
cd g2o
mkdir build && cd build
cmake ..
make -j$(nproc)
sudo make install
sudo ldconfig   # refresh the shared library cache so the freshly installed g2o libraries are found

Clone and build the repository:

cd ~
git clone https://github.com/onera/olcmr --recurse-submodules
cd olcmr
source /opt/ros/galactic/setup.bash
# first pass: build everything except the lidarslam_ros2 packages
colcon build --packages-skip scanmatcher graph_based_slam lidarslam --symlink-install
source install/setup.bash
# second pass: build the remaining packages against the first pass
colcon build --symlink-install

Install the Voxblox package

Follow the Voxblox installation instructions, then copy the custom Voxblox launch files and Rviz config from this repository to the voxblox_ros directory:

cp -r voxblox_files/* ~/<your_noetic_ws>/src/voxblox/voxblox_ros

Running OLCMR with the Newer College dataset

All the parameters of the OLCMR components are located in the src/olcmr_pkg/params folder; they are described in that folder's README file. The following instructions run the complete OLCMR architecture on the Newer College dataset and on our experimental dataset.

First, make sure to set the "bag_file" param in src/olcmr_pkg/params/robot_configs/newer_college_2021.yaml to the path of your bag file.
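For example, assuming the param appears as a plain "bag_file:" key in that YAML file (the bag path below is a placeholder to replace with your own), this can be done from the command line:

sed -i 's|^\( *bag_file:\).*|\1 /path/to/newer_college_quad_easy.bag|' src/olcmr_pkg/params/robot_configs/newer_college_2021.yaml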

Terminal A (ROS1: TSDF and Rviz launch):
Run the TSDF node and visualize the result with Rviz.

source /opt/ros/noetic/setup.bash
source ~/<your_noetic_ws>/devel/setup.bash
roslaunch voxblox_ros newer_college_tsdf_colored.launch

Terminal B (ROS2 launch):
Run the architecture with the example dataset and bridge to ROS1.

source /opt/ros/noetic/setup.bash
source /opt/ros/galactic/setup.bash
source ~/olcmr/install/setup.bash
ros2 launch olcmr_pkg olcmr_newer_college.launch.py 
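Optionally, you can check from extra terminals that topics are flowing on both sides of the bridge; these are standard ROS tools, not part of the OLCMR launch files:

# ROS2 side
source /opt/ros/galactic/setup.bash
ros2 topic list
# ROS1 side
source /opt/ros/noetic/setup.bash
rostopic list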

Saving OLCMR outputs

  • To save the resulting 3D mesh, type in a ROS1-sourced terminal:
rosservice call /voxblox_node/generate_mesh
  • To save the SLAM point cloud maps as .ply files, type in a ROS2-sourced terminal:
ros2 run olcmr_pkg map_saver
  • To save the SLAM estimated trajectories as .csv files, type in a ROS2-sourced terminal (note that ros2 service call requires the service type; std_srvs/srv/Empty is assumed here):
ros2 service call /data_saver/save_trajectories std_srvs/srv/Empty

Results and evaluation

The videos below show OLCMR running in real time on the Newer College dataset and on our custom countryside dataset.

SLAM estimated trajectories and CPU/RAM usage monitoring data can be found in the results folder. See that folder's README file for instructions on plotting the data.

Using your own dataset or running the architecture on your robot

All the launch parameters are located in the template.yaml file.
You can copy this file and modify the parameters to run the architecture on a custom dataset or directly on board your robot; see the parameters README file for a description of each parameter.
You should also edit the olcmr.launch.py file to use your config file, as sketched below.
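For example, a custom configuration could be set up as follows (my_robot.yaml is a hypothetical name, and the template path assumes the params folder layout described above):

cd ~/olcmr
# copy the template and adapt the parameters to your robot or dataset
cp src/olcmr_pkg/params/template.yaml src/olcmr_pkg/params/my_robot.yaml
# then point olcmr.launch.py at my_robot.yaml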

You will need at least a LiDAR and a monochrome or RGB camera. For the architecture to perform optimally, multiple RGB cameras with overlapping fields of view, an IMU, and a wheeled or legged odometry source should also be used. Any other localisation source can also serve as the SLAM prior.
Camera calibration should be performed using Kalibr, for example as sketched below.
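As an illustration, a joint intrinsic/extrinsic calibration of two cameras with Kalibr looks like the following; the bag name, calibration target file, and image topics are placeholders for your own setup:

kalibr_calibrate_cameras \
  --bag calibration.bag \
  --topics /cam0/image_raw /cam1/image_raw \
  --models pinhole-radtan pinhole-radtan \
  --target aprilgrid.yaml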

Publication

If you are using this code, please cite:

@InProceedings{9981137,
 author={Serdel, Quentin and Grand, Christophe and Marzat, Julien and Moras, Julien},
 booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
 title={Online Localisation and Colored Mesh Reconstruction Architecture for 3D Visual Feedback in Robotic Exploration Missions},
 year={2022},
 pages={8690-8697},
 doi={10.1109/IROS47612.2022.9981137}
}
