OdometryMono: localize in a pre-loaded local map created by RGB-D mapping #27

Closed · matlabbe opened this issue Jun 13, 2015 · 2 comments

@matlabbe (Member) commented Jun 13, 2015

The goal is to create a 3D map with a Kinect, then use it with only a webcam to track the camera. Idea from this post: http://official-rtab-map-forum.206.s1.nabble.com/RGBD-SLAM-and-then-RGB-Localization-td489.html

@matlabbe (Member Author)

New features added in the past weeks make it possible to do this. Finding good parameters still takes some tuning, but it works. Starting from RTAB-Map's default parameters:

  1. Create a database for the odometry. We first map an area with the Kinect as usual. However, Odometry and the Vocabulary must use the same type of visual words: see "Preferences (Advanced) -> RTAB-Map Settings -> Visual Word -> Visual word type" for the vocabulary and "Preferences (Advanced) -> RTAB-Map Settings -> Odometry -> Feature detector" for odometry. Here we used ORB for both. Under the Odometry panel, in the "Motion estimation" box, select "3D to 2D PnP" for motion estimation and set the PnP reprojection error to 1 pixel (the same settings are shown as raw parameters in the sketch after this list). Start mapping a small area. When finished, close the database so it is saved to the hard drive.
  2. Set up the database for odometry. Under the Preferences -> Odometry -> BOW panel, set the path of the saved database in the "Path to a fixed map" field.
  3. Select an RGB-only camera. Go to the Source panel in Preferences and select an RGB camera (e.g., the default USB webcam). Calibrate the camera by clicking the "Calibrate" button and save the calibration files with a distinctive name like "my_webcam". Then, back in the Preferences -> Source panel, set the calibration name to "my_webcam". You are now ready for "mapping" (only features will be shown).
  4. Start mapping. The camera must be looking at the same scene as the mapped area for odometry to be computed. You will then be tracking the position of the RGB-only camera.
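All of these dialog settings correspond to raw RTAB-Map parameters (string key/value pairs). Below is a minimal C++ sketch of the equivalent overrides for steps 1 and 2; it is not taken from this issue, and the key names and enum values are assumptions based on recent RTAB-Map versions (they have changed over time), so check Parameters.h in your build for the exact names.

```cpp
#include <rtabmap/core/Parameters.h>

// Parameter overrides roughly equivalent to steps 1-2 above.
rtabmap::ParametersMap makeOdometryDatabaseParams()
{
    rtabmap::ParametersMap params;

    // Same feature type for the vocabulary and for odometry (ORB in this example).
    params["Kp/DetectorStrategy"] = "2";   // assumed enum value: 2 = ORB
    params["Vis/FeatureType"]     = "2";   // assumed enum value: 2 = ORB

    // Motion estimation: 3D to 2D PnP with a 1 pixel reprojection error.
    params["Vis/EstimationType"]  = "1";   // assumed enum value: 1 = 3D->2D (PnP)
    params["Vis/PnPReprojError"]  = "1";

    // Step 2's "Path to a fixed map" field also maps to a parameter key, but its
    // name has changed across versions, so it is left to the GUI here.
    return params;
}
```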

Parameter tuning: the main settings to adjust are the following (see the sketch after the list for example overrides):

  • Vocabulary/Odometry visual word type
  • Number of features to add in the vocabulary and/or to use for odometry
  • PnP parameters
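As a rough illustration, those knobs could also be set as raw parameter overrides. The key names below are assumptions (they differ between RTAB-Map versions) and the values are only placeholders, not recommendations from this issue:

```cpp
#include <rtabmap/core/Parameters.h>

// Hypothetical tuning overrides, to be merged with the map from the previous sketch.
rtabmap::ParametersMap makeTuningOverrides()
{
    rtabmap::ParametersMap tuning;
    tuning["Kp/MaxFeatures"]     = "500";  // assumed key: words added to the vocabulary per image
    tuning["Vis/MaxFeatures"]    = "1000"; // assumed key: features used for motion estimation
    tuning["Vis/PnPReprojError"] = "2";    // relax the PnP reprojection error if tracking is lost too easily
    return tuning;
}
```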

Another thing you can do is use Localization mode with a copy of the database used for odometry, so that the 3D map is shown while the camera is tracked. Don't forget to set PnP for loop-closure constraint estimation. Then open the database, download all clouds, and start mapping. On loop closure, the 3D map should reappear. A rough code sketch of this setup is shown below.
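For reference, here is a minimal, hedged sketch of that localization-mode setup through the C++ API. Localization mode corresponds to setting Mem/IncrementalMemory to false; the other key names, the database path, and the API calls are assumptions about a recent RTAB-Map version, and the GUI workflow described above remains the intended route.

```cpp
#include <rtabmap/core/Rtabmap.h>
#include <rtabmap/core/Parameters.h>

int main()
{
    rtabmap::ParametersMap params;
    params["Mem/IncrementalMemory"] = "false"; // Localization mode: no new nodes are added to the map
    params["Vis/EstimationType"]    = "1";     // assumed enum value: 1 = 3D->2D (PnP) for loop-closure constraints

    // Open a copy of the database that odometry already uses as its fixed map.
    rtabmap::Rtabmap slam;
    slam.init(params, "copy_of_odometry_map.db"); // hypothetical path

    // Frames from the calibrated webcam (with the pose estimated by OdometryMono)
    // would then be passed to slam.process(); on loop closure, the saved
    // 3D map re-appears around the tracked camera.

    slam.close();
    return 0;
}
```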

@matlabbe (Member Author) commented Nov 27, 2016
