Visual odometry (VO) is the process of determining the position and orientation of a robot by analyzing the images from its camera. This project estimates the motion of a calibrated camera mounted on a mobile platform. Motion is estimated by detecting feature points in the images and computing the relative rotation and translation between consecutive frames. A typical VO pipeline consists of the following stages:
- Image sequences
- Feature Detection
- Feature Matching (or Tracking)
- Motion Estimation
  - 2-D to 2-D
  - 3-D to 3-D
  - 3-D to 2-D
This project implements the following three approaches:
- 2D-2D Motion Estimation using the Feature Matching method
- 2D-2D Motion Estimation using the Optical Flow method
- 3D-2D Motion Estimation using the Optical Flow method
- The first image (I1) and the second image (I2) were captured using a calibrated monocular camera setup, and features were computed in both images using the SIFT feature detector.
- Corresponding features were matched using FlannBasedMatcher / brute force, and accuracy was maintained using the ratio test.
- Using the matched features, the essential matrix for the image pair I1, I2 was computed.
- The essential matrix was decomposed into a rotation matrix and a translation vector, up to scale ambiguity.
- A relative 3D point cloud was computed by triangulating the image pair.
- The process was repeated to compute the point cloud for the next corresponding image pair.
- The relative scale was computed by comparing distances between corresponding 3D points in consecutive point clouds (obtained from keypoints matched across subsequent images), and the translation was rescaled accordingly.
- The transformations were concatenated and the process repeated; a sketch of this pipeline follows the list.
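A minimal sketch of this pipeline, assuming OpenCV is used (the project's actual code is linked below); the image paths and the intrinsic matrix `K` are placeholders, not values from the project:

```python
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.19],   # placeholder intrinsics; replace
              [0.0, 718.856, 185.21],   # with your calibrated camera matrix
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect SIFT features in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match descriptors with FLANN; keep matches passing Lowe's ratio test.
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Essential matrix from the matched points (RANSAC rejects outliers).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)

# 4. Decompose E into R, t (t is only known up to scale).
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 5. Triangulate the correspondences into a relative 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 points in the frame of image 1
```

The relative-scale step is not shown above: it compares distances between corresponding 3D points in consecutive clouds and rescales `t` accordingly before the transformation is concatenated.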
- 2D-2D Feature Matching Approach code
Reference paper: 2D-2D Feature Matching
- The main limitation is slow computation: SIFT feature extraction is computationally expensive, and brute-force matching is slow.
- The fix is to use feature tracking instead of feature matching, together with the FAST feature detector, as done in the next approach: 2D-2D Motion Estimation (Feature Tracking).
- The first image (I1) and the second image (I2) were captured, and features were computed in the first image using the FAST feature detector.
- The features of I1 were tracked in I2 using the Lucas-Kanade optical flow method.
- From the tracked features, the essential matrix, rotation matrix, translation vector, and relative scale between the images were computed as explained above.
- Features were tracked in the subsequent frames and the transformations concatenated.
- The reference frame was updated whenever a sufficient number of features could no longer be tracked, and the process was repeated; a sketch of this loop follows the list.
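A minimal sketch of the tracking variant under the same assumptions (OpenCV, placeholder frame paths and intrinsics); the FAST threshold and the minimum feature count are illustrative guesses, not the project's values:

```python
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[718.856, 0.0, 607.19],
              [0.0, 718.856, 185.21],
              [0.0, 0.0, 1.0]])  # placeholder calibrated intrinsics

# 1. Detect FAST corners in the reference frame; calcOpticalFlowPyrLK
#    expects an (N, 1, 2) float32 array of point coordinates.
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
p0 = np.float32([kp.pt for kp in fast.detect(prev, None)]).reshape(-1, 1, 2)

# 2. Track the corners into the current frame with pyramidal Lucas-Kanade.
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                         winSize=(21, 21), maxLevel=3)
good0 = p0[status.ravel() == 1].reshape(-1, 2)
good1 = p1[status.ravel() == 1].reshape(-1, 2)

# 3. Same geometry as the matching approach: essential matrix -> (R, t).
E, mask = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=mask)

# 4. When too few features survive tracking, re-detect in the current frame
#    and make it the new reference (the count 1500 is an arbitrary example).
if good1.shape[0] < 1500:
    p0 = np.float32([kp.pt for kp in
                     fast.detect(curr, None)]).reshape(-1, 1, 2)
```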
- 2D-2D Feature Tracking Approach code
- This improved the time and space complexity of the code, with a considerable improvement in the frame processing rate.
- The first image (I1) and the second image (I2) were captured, and features were computed in the first image using the FAST feature detector.
- The rotation and translation between the first two images were computed using the 2D-2D approach, and the matched points were triangulated to obtain a 3D point cloud.
- Motion was estimated using EPnP until the reprojection error between the reprojected 3D points and the current frame exceeded a threshold.
- Once the threshold was exceeded, the reference frame was changed from I1 to Ir, where r < n (n being the index of the current frame), and the point cloud was re-triangulated.
- This process was repeated; a sketch of the pose-estimation step follows the list.
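A minimal sketch of the EPnP pose-estimation step, again assuming OpenCV; the function name `pose_from_3d2d`, its inputs, and the reprojection-error threshold are illustrative, not taken from the project:

```python
import cv2
import numpy as np

def pose_from_3d2d(points3d, points2d, K, reproj_thresh=3.0):
    """Estimate the camera pose from 3D-2D correspondences with EPnP inside
    RANSAC, and report the mean reprojection error so the caller can decide
    when to switch the reference frame and re-triangulate."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points3d.astype(np.float32), points2d.astype(np.float32), K, None,
        flags=cv2.SOLVEPNP_EPNP, reprojectionError=reproj_thresh)
    if not ok:
        return None  # fall back to re-initialising from 2D-2D geometry

    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix

    # Mean reprojection error over the inliers; when this degrades past the
    # threshold, the reference frame is updated and the point cloud
    # re-triangulated, as described in the steps above.
    idx = inliers.ravel()
    proj, _ = cv2.projectPoints(points3d[idx], rvec, tvec, K, None)
    err = np.linalg.norm(proj.reshape(-1, 2) - points2d[idx], axis=1).mean()
    return R, tvec, err
```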
- 3D-2D implementation code