Error using Monocular Visual Odometry and other questions #342

Closed
ArthurLovekinGecko opened this issue Jun 12, 2023 · 3 comments
Labels: question (Theory or implementation question)

@ArthurLovekinGecko

Error using Monocular Visual Odometry:
When I attempt to run monocular config files (e.g., using either of the commands below), I immediately get [ERROR] [run_subscribe_msckf-1]: process has died. How can I fix this in order to run monocular visual odometry?

ros2 launch ov_msckf subscribe.launch.py config:=rs_d455
ros2 launch ov_msckf subscribe.launch.py config:=rpng_plane

Other (unrelated) Questions:

  1. When I initialize (with the camera stationary), I get the message below until I move my camera up and down. Is this standard behavior for OpenVINS? For other visual odometry algorithms?
    [init]: disparity is 0.283,0.302 (10.00 thresh)
    [run_subscribe_msckf-1] [init]: failed static init: no accel jerk detected
  2. What do the colors of the detected features mean in the ov_msckf/loop_depth_colored visualization?
  3. Is it possible to run this algorithm without the IMU (e.g., is there an easy way to turn off the IMU in the config file)? What would the performance look like if I did? (I recognize that a visual-inertial navigation system might not work well without an IMU, but for my current project it would be helpful to explore this option.) If it is not possible with this library, which algorithm would you recommend looking into (ov_plane, VINS-Fusion, ORBSLAM3, etc.)?
  4. I would like to explore dynamically masking the environment (something along the lines of what is done in the MaskSLAM and DynaSLAM papers). It looks like masking is supported in OpenVINS, and I see the toggle for this in the rpng_ironsides config files. However, it is unclear how I can actually use this functionality (especially for dynamic masking). To which part of the code-base should I look to get dynamic masking working?
goldbattle added the question (Theory or implementation question) label Jun 17, 2023
@goldbattle (Member)

I get the following errors about cam1 calibration:
[screenshot of the cam1 calibration errors]

You should run the following, since those datasets contain only a single (monocular) camera.

ros2 launch ov_msckf subscribe.launch.py config:=rs_d455 max_cameras:=1
ros2 launch ov_msckf subscribe.launch.py config:=rpng_plane max_cameras:=1

With a monocular camera, the system is unable to initialize unless there is movement; thus, the platform needs to be stationary and then start moving. We detect this based on both an IMU threshold and feature disparity (see the config file comments).
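
For reference, the knobs that control this live in the estimator config. Parameter names below are from a typical OpenVINS estimator_config.yaml; your exact values may differ (note the 10.00 disparity threshold matches your log output):

init_window_time: 2.0    # seconds of data used to initialize
init_imu_thresh: 1.5     # accel jerk (m/s^2) needed to declare "motion detected"
init_max_disparity: 10.0 # max pixel disparity allowed to be considered "stationary"
init_max_features: 50    # number of features to track during initialization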

The colors encode the depth of each feature; you can inspect the code here:

// Taken from the LSD-SLAM codebase, segmented into 0-4 meter segments:
// https://github.com/tum-vision/lsd_slam/blob/d1e6f0e1a027889985d2e6b4c0fe7a90b0c75067/lsd_slam_core/src/util/globalFuncs.cpp#L87-L96
float id = 1.0f / (float)uvd(2); // inverse depth of the feature
float r = (0.0f - id) * 255 / 1.0f;
if (r < 0)
  r = -r;
float g = (1.0f - id) * 255 / 1.0f;
if (g < 0)
  g = -g;
float b = (2.0f - id) * 255 / 1.0f;
if (b < 0)
  b = -b;
// clamp each channel to [0, 255], then invert for the final color
uchar rc = r < 0 ? 0 : (r > 255 ? 255 : r);
uchar gc = g < 0 ? 0 : (g > 255 ? 255 : g);
uchar bc = b < 0 ? 0 : (b > 255 ? 255 : b);
cv::Scalar color(255 - rc, 255 - gc, 255 - bc);
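
In short, the hue is a function of the feature's inverse depth id = 1/z, binned over roughly the 0-4 meter range noted in the comment. For example, a feature at exactly 1 m depth (id = 1) zeroes the g term, so after the final inversion exactly one channel is saturated; nearer and farther features shift toward the other channels.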

The system is built around an IMU sensor at its core, thus IMU-free operation isn't something we will ever support. I recommend checking out the ORB-SLAM family if you want monocular visual odometry (VO).

We support static masking (check out the tum_vi config file), but to support dynamic masking you would need to either call a network to get that segmentation, or change the ROS subscriptions to subscribe to a topic which provides that mask. The key place to change would be here:

// Load the mask if we are using it, else it is empty
// TODO: in the future we should get this from external pixel segmentation
if (_app->get_params().use_mask) {
  message.masks.push_back(_app->get_params().masks.at(cam_id0));
  message.masks.push_back(_app->get_params().masks.at(cam_id1));
} else {
  // message.masks.push_back(cv::Mat(cv_ptr0->image.rows, cv_ptr0->image.cols, CV_8UC1, cv::Scalar(255)));
  message.masks.push_back(cv::Mat::zeros(cv_ptr0->image.rows, cv_ptr0->image.cols, CV_8UC1));
  message.masks.push_back(cv::Mat::zeros(cv_ptr1->image.rows, cv_ptr1->image.cols, CV_8UC1));
}

You can imagine either having the mask as another synced input here, or just directly calling a network and getting a mask. Hope this helps, good luck!
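
For the "subscribe to a mask topic" route, here is a rough, untested sketch of what that could look like in ROS2. The topic names, the mask convention, and the hand-off into ov_msckf are all assumptions; the real change would live in the subscriber code quoted above:

// Hypothetical node: time-sync a camera image with an externally published
// segmentation mask (e.g. produced by a network running in another node).
#include <functional>
#include <cv_bridge/cv_bridge.h>
#include <message_filters/subscriber.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <message_filters/synchronizer.h>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>

using sensor_msgs::msg::Image;
using SyncPolicy = message_filters::sync_policies::ApproximateTime<Image, Image>;

class MaskedCamNode : public rclcpp::Node {
public:
  MaskedCamNode() : Node("masked_cam") {
    sub_img_.subscribe(this, "/cam0/image_raw"); // assumed camera topic
    sub_mask_.subscribe(this, "/cam0/mask");     // assumed external mask topic
    sync_ = std::make_shared<message_filters::Synchronizer<SyncPolicy>>(SyncPolicy(10), sub_img_, sub_mask_);
    sync_->registerCallback(std::bind(&MaskedCamNode::callback, this, std::placeholders::_1, std::placeholders::_2));
  }

private:
  void callback(const Image::ConstSharedPtr &img, const Image::ConstSharedPtr &mask) {
    cv::Mat image = cv_bridge::toCvCopy(img, "mono8")->image;
    cv::Mat m = cv_bridge::toCvCopy(mask, "mono8")->image;
    // Following the quoted snippet, a nonzero mask pixel marks a region to
    // ignore (the "empty" mask above is all zeros). Here you would build the
    // ov_core::CameraData message, push `m` into message.masks instead of the
    // static mask, and call VioManager::feed_measurement_camera(message)
    // as the stock node does.
    (void)image; (void)m; // silence unused warnings in this sketch
  }

  message_filters::Subscriber<Image> sub_img_, sub_mask_;
  std::shared_ptr<message_filters::Synchronizer<SyncPolicy>> sync_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MaskedCamNode>());
  rclcpp::shutdown();
  return 0;
}

Alternatively, as noted above, you could call the segmentation network synchronously inside the existing image callback and skip the extra topic entirely.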

@Aishkrish18 commented Jun 23, 2023

I have a short question. I am running a d455 using OpenVINS on ROS1 Noetic, and after the output shown below I don't see any tracking at all. I am not quite sure where I went wrong. I disabled stereo mode everywhere and changed to mono with one camera. I am trying to run the estimation live with my d455 camera.
Could someone help me out here?
[screenshot of the terminal output]

@goldbattle (Member)

Hi @Aishkrish18, I recommend opening a new issue if you are unable to solve it. I recommend inspecting the published topics and enabling higher verbosity to see if the system is properly receiving data. Likely, the published topics do not match your sensor launch file.
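
For example, on ROS1 Noetic something like the following can confirm data is actually arriving (the topic names depend on your realsense launch file):

rostopic list
rostopic hz /camera/imu
rostopic hz /camera/color/image_raw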
