Running only using pointclouds #49
Sorry I missed that question, better late than never... Yes, RTAB-Map depends on visual features for loop closure detection. If you add a camera to your setup, it is possible to use these kinds of lidars. cheers
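To illustrate what "visual features" means here, below is a minimal sketch of counting feature matches between two camera frames, which is the kind of appearance signal loop closure detection relies on. It uses OpenCV's ORB as a stand-in; RTAB-Map's actual detector, descriptor, and bag-of-words pipeline are configurable and not shown. The image arguments are assumed to be grayscale NumPy arrays.

```python
import cv2

def count_feature_matches(img1, img2, ratio=0.7):
    """Count distinctive ORB feature matches between two grayscale images."""
    orb = cv2.ORB_create(1000)                    # detect up to 1000 keypoints per image
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0                                  # not enough texture to match anything

    # Brute-force Hamming matcher for binary ORB descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)

    # Lowe's ratio test: keep a match only if it is clearly better than the runner-up
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good)
```

A lidar-only setup has no such image to extract features from, which is why a camera is needed for this kind of appearance-based loop closure.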
@matlabbe, how would you use RTAB-Map+camera in this setup? Only to get localization data for registering the lidar point clouds into a 3D model? Or would you use camera+RTAB-Map to build the 3D model itself from the visual data (in which case it seems the lidar would not be needed)? What are the (dis)advantages of these methods? Thank you.
Hi, here are some combinations:
With a single RGB camera and a 3D lidar, it would be possible to use the lidar data to "fake" a depth image registered with the RGB camera. This would give the RGB-D+lidar setup above. cheers,
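As an illustration of that "fake depth image" idea, here is a minimal sketch that projects lidar points into a pinhole camera model to build a depth image registered with the RGB frame. The intrinsic matrix `K` and the lidar-to-camera transform `T_cam_lidar` are assumed inputs from your own calibration; this is not RTAB-Map's actual implementation.

```python
import numpy as np

def lidar_to_depth(points_lidar, T_cam_lidar, K, width, height):
    """points_lidar: Nx3 points in the lidar frame.
    T_cam_lidar: 4x4 transform from lidar frame to camera frame.
    K: 3x3 camera intrinsic matrix.
    Returns a float32 depth image in meters (0 = no lidar return for that pixel)."""
    # Transform points into the camera frame
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Project with the pinhole model
    uv = (K @ pts_cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Discard projections that fall outside the image
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[valid], v[valid], z[valid]

    # Fill the depth image; write far points first so near points overwrite them
    depth = np.zeros((height, width), dtype=np.float32)
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

Since a rotating lidar is much sparser than a camera, the resulting depth image will have many empty pixels; in practice the sparse depth is often dilated or interpolated before being used as an RGB-D input.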
Thank you, Mathieu! Your work at Sherbrooke is great 😄
You're welcome!
Thank you Mathieu, I'll try RTAB-Map + stereo camera (ZED) + lidar (VLP-16). If you have any pointers on how to combine the lidar data with the stereo camera's data, please let me know.
Quick question: would it be possible to run rtabmap solely using point clouds generated from laser rangefinders (e.g. a 2D Hokuyo lidar mounted on a slip ring, or the new Velodyne VLP-16), allowing for a few modifications?
Or is that not possible because you depend on SURF features (or the like) for operation?
Thanks!