
limo

Lidar-Monocular Visual Odometry. This library is designed to be an open platform for visual odometry algorithm development. We focus explicitly on the simple integration of the following key methodologies:

  • Keyframe selection
  • Landmark selection
  • Prior estimation
  • Depth integration from different sensors

The core library keyframe_bundle_adjustment is a backend designed to make it easy to swap these modules and to develop such algorithms.

  • It is meant as an add-on module that performs temporal inference over the optimization graph in order to smooth the result

  • In order to do this online, a windowed (sliding-window) approach is used

  • Keyframes are instants in time that are used for the bundle adjustment; one keyframe may have several cameras (and therefore images) associated with it

  • Keyframe selection tries to reduce the amount of redundant information while extending the time span covered by the optimization window in order to reduce drift

  • Methodologies for keyframe selection (a minimal sketch follows this list):

    • Difference in time
    • Difference in motion
  • We use this library for combining Lidar with monocular vision (a sketch of the depth integration idea follows the note below).
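
The two keyframe selection criteria above can be illustrated with a minimal sketch. The types, the function name and the threshold values below are made up for illustration only and are not part of the keyframe_bundle_adjustment interface:

    // Minimal sketch of keyframe selection by difference in time and in motion.
    // All names and thresholds are illustrative, not the keyframe_bundle_adjustment API.
    #include <cmath>

    struct Pose {
        double x, y, z, yaw;  // simplified pose, enough for the illustration
    };

    struct Frame {
        double timestamp;     // seconds
        Pose pose;            // prior pose estimate of the frame
    };

    // A frame becomes a keyframe if enough time has passed OR the camera moved enough.
    bool isNewKeyframe(const Frame& candidate, const Frame& last_keyframe,
                       double min_time_diff = 0.3,    // seconds (assumed value)
                       double min_translation = 0.5,  // meters (assumed value)
                       double min_rotation = 0.05) {  // radians (assumed value)
        const double dt = candidate.timestamp - last_keyframe.timestamp;

        const double dx = candidate.pose.x - last_keyframe.pose.x;
        const double dy = candidate.pose.y - last_keyframe.pose.y;
        const double dz = candidate.pose.z - last_keyframe.pose.z;
        const double translation = std::sqrt(dx * dx + dy * dy + dz * dz);
        // Ignores angle wrap-around for brevity.
        const double rotation = std::abs(candidate.pose.yaw - last_keyframe.pose.yaw);

        return dt > min_time_diff || translation > min_translation || rotation > min_rotation;
    }

Frames that do not qualify are left out of the optimization window, which keeps the window small while it still covers a long time span.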

Note

This is work in progress; detailed install instructions and examples will follow. In this repository we use a version with built-in prior estimation, since it is faster and requires fewer modules; however, up to now the results are slightly worse than those of the KITTI version.
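
The "depth integration from different sensors" listed above is, in limo's case, the step of attaching lidar depth to monocular feature tracks. The sketch below only illustrates the idea: it assumes a pinhole camera, a lidar cloud already transformed into the camera frame, and takes the depth of the nearest projected point. The names, the threshold and this particular association rule are assumptions for the sketch, not the library's implementation:

    // Conceptual sketch: estimate the depth of a tracked image feature from a
    // lidar point cloud given in the camera frame. Illustrative only.
    #include <cmath>
    #include <limits>
    #include <vector>

    struct Point3d { double x, y, z; };
    struct Pixel   { double u, v; };

    struct PinholeCamera {
        double fx, fy, cx, cy;  // intrinsics
        Pixel project(const Point3d& p) const {
            return {fx * p.x / p.z + cx, fy * p.y / p.z + cy};
        }
    };

    // Returns the depth of the lidar point that projects closest to the feature,
    // or a negative value if no point projects close enough.
    double estimateFeatureDepth(const Pixel& feature,
                                const std::vector<Point3d>& cloud_in_camera_frame,
                                const PinholeCamera& cam,
                                double max_pixel_dist = 3.0) {  // assumed threshold
        double best_dist = std::numeric_limits<double>::max();
        double best_depth = -1.0;
        for (const auto& p : cloud_in_camera_frame) {
            if (p.z <= 0.0) continue;  // behind the camera
            const Pixel proj = cam.project(p);
            const double du = proj.u - feature.u;
            const double dv = proj.v - feature.v;
            const double dist = std::sqrt(du * du + dv * dv);
            if (dist < max_pixel_dist && dist < best_dist) {
                best_dist = dist;
                best_depth = p.z;
            }
        }
        return best_depth;
    }

Landmarks that receive such a depth measurement can be constrained much more strongly in the bundle adjustment than purely monocular ones, which is what makes the lidar-monocular combination attractive.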

Installation

Requirements

In any case you need ROS with a catkin workspace; the virtual-machine image in the "Try it out" section below ships Ubuntu 16.04 with ROS Kinetic and all dependencies for limo preinstalled.

Build

  • initialize a catkin workspace:

    cd *your_catkin_workspace*
    catkin init
  • clone limo into the src folder of the workspace:

    cd *your_catkin_workspace*/src
    git clone https://github.com/johannes-graeter/limo.git
  • clone the dependencies and build the repositories:

    cd *your_catkin_workspace*/src/limo
    bash install_repos.sh
  • run the unit tests:

    cd *your_catkin_workspace*
    catkin run_tests --profile limo_release

Run

  • get test data from https://www.mrt.kit.edu/graeterweb/04.bag

    1. this is a bag file generated from KITTI sequence 04 with added semantic labels (a sketch of how such labels can serve landmark selection follows this list).
    2. more bag files are available under the same address, all named ??.bag (currently supported: 00.bag, 04.bag)
  • in different terminals:

    1. roscore
    2. rosbag play 04.bag -r 0.1 --pause --clock
    3. source *your_catkin_workspace*/devel/setup.sh
      roslaunch demo_keyframe_bundle_adjustment_meta kitti_standalone.launch
    4. unpause rosbag (hit space in terminal)
    5. rviz -d *your_catkin_workspace*/src/demo_keyframe_bundle_adjustment_meta/res/default.rviz
  • watch limo trace the trajectory in rviz :)
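
The semantic labels carried by the bag files are one possible input for the landmark selection mentioned at the top of this README. A plausible use, sketched below, is to reject landmarks on potentially moving objects; the class names are invented for the illustration and do not correspond to the labels actually stored in the bags:

    // Illustration only: keep landmarks whose semantic class suggests a static
    // object. The classes are made up and do not match the labels in the bags.
    enum class SemanticClass { Road, Building, Vegetation, Vehicle, Person };

    struct Landmark {
        double x, y, z;
        SemanticClass label;  // label propagated from the image segmentation
    };

    bool isStaticLandmark(const Landmark& lm) {
        switch (lm.label) {
            case SemanticClass::Vehicle:
            case SemanticClass::Person:
                return false;  // potentially dynamic: exclude from the bundle adjustment
            default:
                return true;
        }
    }

Only landmarks passing such a filter would be handed to the bundle adjustment.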

Todo

  • runtime is OK for the individual modules; however, communication between the nodes must be improved to enable online usage (nodelets, ...).
  • add and try rocc landmark selection
  • for fewer packages and better runtime we do not use external priors from liviodo as in the KITTI version, but internal priors and a motion-only adjustment in between keyframes. However, results are slightly worse than on KITTI; tune this so that it is no longer the case.

Try it out

If you just want to give it a quick peek, I prepared a ready-to-use VirtualBox image (packed with Ubuntu 16.04.04, ROS Kinetic and all dependencies for limo).

  • Download it from https://www.mrt.kit.edu/graeterweb/limo_full.ova.
  • Password for the vm-image is "1234".
  • Find all modules in ~/workspaces/limo/ .
  • Run example (~/04.bag) as described above.
  • Note that the runtime in the virtual machine is slower than on a normal system.
