Lidar-Monocular Visual Odometry. This library is designed to be an open platform for visual odometry algorithm development. We focus explicitly on the simple integration of the following key methodologies:
- Keyframe selection
- Landmark selection
- Prior estimation
- Depth integration from different sensors
The core library keyframe_bundle_adjustment is a backend that makes it easy to swap these modules and to develop such algorithms.
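To make the idea of swappable modules concrete, here is a minimal C++ sketch of what such interchangeable interfaces could look like. All type and method names are hypothetical illustrations and are not the actual keyframe_bundle_adjustment API.

```cpp
#include <map>
#include <vector>

// Hypothetical data types, for illustration only.
struct Keyframe {};  // an instance in time with its associated camera measurements
struct Landmark {};  // a triangulated 3d point observed from several keyframes

// Keyframe selection: decides whether a candidate frame enters the optimization window.
struct KeyframeSelectionScheme {
    virtual ~KeyframeSelectionScheme() = default;
    virtual bool isKeyframe(const Keyframe& candidate,
                            const std::vector<Keyframe>& window) const = 0;
};

// Landmark selection: chooses the subset of landmarks used in the bundle adjustment.
struct LandmarkSelectionScheme {
    virtual ~LandmarkSelectionScheme() = default;
    virtual std::vector<int> select(const std::map<int, Landmark>& landmarks) const = 0;
};

// Prior estimation: provides an initial pose guess for the newest keyframe.
struct PriorEstimationScheme {
    virtual ~PriorEstimationScheme() = default;
    virtual void estimatePrior(Keyframe& newest, const std::vector<Keyframe>& window) const = 0;
};

// Depth integration: attaches depth measurements (e.g. from lidar) to monocular features.
struct DepthIntegrationScheme {
    virtual ~DepthIntegrationScheme() = default;
    virtual void addDepth(Keyframe& keyframe) const = 0;
};
```

A concrete pipeline would combine one implementation of each interface, which is what allows exchanging, for example, one keyframe selection strategy for another without touching the rest of the optimization.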
- keyframe_bundle_adjustment is supposed to be an add-on module to do temporal inference of the optimization graph in order to smooth the result
- In order to do that online, a windowed approach is used
- Keyframes are instances in time which are used for the bundle adjustment; one keyframe may have several cameras (and therefore images) associated with it
- The selection of keyframes tries to reduce the amount of redundant information while extending the time span covered by the optimization window to reduce drift
- Methodologies for keyframe selection (see the sketch below this list):
  - Difference in time
  - Difference in motion
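As a rough sketch of the two criteria above, a new keyframe could be created whenever the elapsed time or the estimated motion since the last keyframe exceeds a threshold. The function and threshold values below are illustrative assumptions and do not correspond to the actual implementation.

```cpp
#include <cmath>

// Minimal pose representation for this illustration.
struct Pose {
    double x{0.0}, y{0.0}, z{0.0};  // translation in metres
    double yaw{0.0};                // heading in radians
};

struct Frame {
    double timestamp{0.0};  // seconds
    Pose pose;              // prior pose estimate of the frame
};

// Difference in time / difference in motion: a candidate frame becomes a new
// keyframe if enough time has passed since the last keyframe, or if the
// estimated translation or rotation since then is large enough.
// Thresholds are made-up example values, not tuned parameters from limo.
bool selectAsKeyframe(const Frame& last_keyframe, const Frame& candidate,
                      double min_time_diff = 0.3,    // seconds
                      double min_translation = 0.5,  // metres
                      double min_rotation = 0.05) {  // radians
    const double dt = candidate.timestamp - last_keyframe.timestamp;

    const double dx = candidate.pose.x - last_keyframe.pose.x;
    const double dy = candidate.pose.y - last_keyframe.pose.y;
    const double dz = candidate.pose.z - last_keyframe.pose.z;
    const double translation = std::sqrt(dx * dx + dy * dy + dz * dz);
    const double rotation = std::fabs(candidate.pose.yaw - last_keyframe.pose.yaw);

    return dt >= min_time_diff || translation >= min_translation || rotation >= min_rotation;
}
```

Whether the criteria are combined with a logical or, a logical and, or more elaborate logic (e.g. suppressing keyframes during standstill) is exactly the kind of design decision the library is meant to make easy to experiment with.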
We use this library for combining Lidar with monocular vision.
This is work in progress; detailed install instructions and examples will follow. In this repo we use a version with built-in prior estimation, since it is faster and requires fewer modules; however, the results are currently slightly worse than those of the KITTI version.
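As an illustration of what an internal prior can mean, the following sketch predicts the pose of the newest keyframe under a constant-motion assumption; this is a simplified example, not limo's actual prior estimation.

```cpp
#include <Eigen/Geometry>

// Constant-motion prior (illustrative): replay the last relative motion to
// predict the pose of the current keyframe. The prediction can serve as the
// initialization / prior for the adjustment between keyframes.
Eigen::Isometry3d predictPosePrior(const Eigen::Isometry3d& pose_before_last,
                                   const Eigen::Isometry3d& pose_last) {
    // Relative motion between the two most recent keyframes.
    const Eigen::Isometry3d last_motion = pose_before_last.inverse() * pose_last;
    // Assume the same motion happens again.
    return pose_last * last_motion;
}
```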
In any case you will need:
- ceres: follow the instructions at http://ceres-solver.org/installation.html
- png++:
  ```shell
  sudo apt-get install libpng++-dev
  ```
- install ROS: https://wiki.ros.org/kinetic/Installation
- install catkin_tools:
  ```shell
  sudo apt-get install python-catkin-tools
  ```
- install opencv_apps:
  ```shell
  sudo apt-get install ros-kinetic-opencv-apps
  ```
- initiate a catkin workspace:
  ```shell
  cd *your_catkin_workspace*
  catkin init
  ```
- clone limo into src of workspace:
  ```shell
  cd *your_catkin_workspace*/src
  git clone https://github.com/johannes-graeter/limo.git
  ```
- clone dependencies and build repos:
  ```shell
  cd *your_catkin_workspace*/src/limo
  bash install_repos.sh
  ```
- unittests:
  ```shell
  cd *your_catkin_workspace*
  catkin run_tests --profile limo_release
  ```
- get test data from https://www.mrt.kit.edu/graeterweb/04.bag
  - this is a bag file generated from KITTI sequence 04 with added semantic labels
  - more bag files are available under the same address, all named ??.bag (currently supported: 00.bag, 04.bag)
- in different terminals:
  ```shell
  roscore
  ```
  ```shell
  rosbag play 04.bag -r 0.1 --pause --clock
  ```
  ```shell
  source *your_catkin_workspace*/devel/setup.sh
  roslaunch demo_keyframe_bundle_adjustment_meta kitti_standalone.launch
  ```
- unpause the rosbag (hit space in its terminal)
- start rviz with the provided config:
  ```shell
  rviz -d *your_catkin_workspace*/src/demo_keyframe_bundle_adjustment_meta/res/default.rviz
  ```
- watch limo trace the trajectory in rviz :)
- runtime is ok for the individual modules; however, communication between nodes must be improved to enable online usage (nodelets...).
- add and try rocc landmark selection
- for fewer packages and better runtime we do not use external priors from liviodo as in the KITTI version, but internal priors and a motion-only adjustment in between keyframes. However, results are slightly worse than on KITTI; tune so that this is not the case.
If you just want to give it a quick peek, I prepared a ready-to-use VirtualBox image (packed with Ubuntu 16.04.04, ROS kinetic and all dependencies for limo).
- download it from https://www.mrt.kit.edu/graeterweb/limo_full.ova.
- Password for the vm-image is "1234".
- Find all modules in ~/workspaces/limo/.
- Run example (~/04.bag) as described above.
- Note that the runtime in the virtual machine is slower than on a normal system.