Reproduce the result #3

Open
panxkun opened this issue Jul 19, 2024 · 1 comment

Comments


panxkun commented Jul 19, 2024

Great work!

I have some questions about the experiments in the paper.

  1. For the experiments on the EuRoC dataset, the results are evaluated using ATE, which implies a metric-scale evaluation. VINS and ORB-SLAM3 can recover metric scale from the IMU, but where do your method and CCM-SLAM obtain the real scale?
  2. I have set up the code and found that the results for the four EuRoC sequence groups are significantly worse than those reported in the paper, using the default conda environment and code configuration. What could explain this inconsistency?
| Scene    | Paper | Reproduced |
| -------- | ----- | ---------- |
| MH01-03  | 0.022 | 0.045      |
| MH01-05  | 0.036 | 0.059      |
| V101-103 | 0.031 | 0.081      |
| V201-203 | 0.024 | 0.072      |

(Attached screenshots: results from the paper and my reproduced results.)

@lahavlipson (Collaborator)

Hi @panxkun, I totally missed your issue, sorry for the extremely late reply!

  1. During evaluation, we align the predicted camera poses to the ground truth by computing a single 7-DOF transformation for the entire sequence group, as in standard monocular SLAM evaluation. The same applies to the other monocular SLAM baselines such as ORB-SLAM3 and CCM-SLAM. See Sec. 4.2 (a minimal alignment sketch follows after this list).

  2. If I clone a fresh copy of the repo and datasets, I get Vicon 1: 0.032 / Vicon 2: 0.021 / Machine Hall: 0.038 / Machine Hall0-3: 0.021, which is roughly what is reported in the paper (see logs below). I do often see non-deterministic behavior on each sequence individually; for example, on Vicon 2, running 5 trials I get 0.021m, 0.023m, 0.024m, 0.041m, 0.070m.
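For reference, here is a minimal sketch of the 7-DOF (Sim(3)) alignment mentioned in point 1, in the style of the standard Umeyama method. This is not the repository's evaluation code; `pred_xyz` and `gt_xyz` are assumed to be corresponding Nx3 arrays of camera positions.

```python
import numpy as np

def align_sim3(est, gt):
    """Umeyama alignment: scale s, rotation R, translation t minimizing
    || gt - (s * R @ est + t) ||^2 over all trajectory points (7 DOF)."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g                    # centered trajectories
    U, S, Vt = np.linalg.svd(G.T @ E / len(est))    # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:    # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / E.var(0).sum()   # optimal scale
    t = mu_g - s * R @ mu_e
    return s, R, t

# ATE RMSE after Sim(3) alignment (pred_xyz, gt_xyz: hypothetical Nx3 arrays):
# s, R, t = align_sim3(pred_xyz, gt_xyz)
# aligned = s * (R @ pred_xyz.T).T + t
# ate_rmse = np.sqrt(np.mean(np.sum((gt_xyz - aligned) ** 2, axis=1)))
```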

Anecdotally, I've found that changing the bundle adjustment data type to 64-bit can help. You can do this in `block_e.cuh` by changing `mtype` to `double` and `mdtype` to `torch::kFloat64`.
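To illustrate why the precision of the bundle adjustment solve matters (a generic toy example, not code from this repo): the normal equations that arise in bundle adjustment can be poorly conditioned, and solving them in float32 loses noticeably more accuracy than in float64.

```python
import torch

torch.manual_seed(0)
# Toy ill-conditioned least-squares problem, built in float64 so that both
# runs below start from identical data.
A64 = torch.randn(2000, 50, dtype=torch.float64)
A64[:, 0] *= 1e-4                      # nearly degenerate direction
x_true = torch.randn(50, dtype=torch.float64)
b64 = A64 @ x_true

for dtype in (torch.float32, torch.float64):
    A, b = A64.to(dtype), b64.to(dtype)
    H, g = A.T @ A, A.T @ b            # Gauss-Newton style normal equations
    x = torch.linalg.solve(H, g)
    err = (x.double() - x_true).abs().max().item()
    print(f"{dtype}: max solution error = {err:.2e}")
```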

Logs (excluding tqdm output)

$ python eval_euroc.py 'Vicon 1'
#Inliers=[ 0  1  0  1  1 27]
Finished connecting disjoint trajectories
#Inliers=[25 11 33  4 34 23]
Finished connecting disjoint trajectories
Saved our_predictions/Vicon_1/Estimated_vs_Ground_Truth.pdf
Vicon 1 Err (t): 0.032m | Recall [(0.01, 10002.0, 13.713), (0.025, 10003.0, 46.263), (0.05, 10005.0, 96.534), (0.1, 10010.0, 99.548)] | 0:08:58.486870

$ python eval_euroc.py 'Vicon 2'
#Inliers=[20 24 19 14 24 25]
Finished connecting disjoint trajectories
#Inliers=[ 6  8  5  5  7 32]
Finished connecting disjoint trajectories
Saved our_predictions/Vicon_2/Estimated_vs_Ground_Truth.pdf
Vicon 2 Err (t): 0.021m | Recall [(0.01, 10002.0, 24.014), (0.025, 10003.0, 86.984), (0.05, 10005.0, 96.707), (0.1, 10010.0, 99.907)] | 0:09:01.162620

$ python eval_euroc.py 'Machine Hall'
#Inliers=[30 38 19 35 38 21]
Finished connecting disjoint trajectories
#Inliers=[ 9 21 25 17 18  5]
Finished connecting disjoint trajectories
#Inliers=[ 0  0 23 25 33 19]
Finished connecting disjoint trajectories
#Inliers=[ 2  7  4 30 18 36]
Finished connecting disjoint trajectories
Saved our_predictions/Machine_Hall/Estimated_vs_Ground_Truth.pdf
Machine Hall Err (t): 0.038m | Recall [(0.01, 10002.0, 5.11), (0.025, 10003.0, 58.229), (0.05, 10005.0, 88.963), (0.1, 10010.0, 96.866)] | 0:18:44.786021

$ python eval_euroc.py 'Machine Hall0-3'
#Inliers=[30 38 19 35 38 21]
Finished connecting disjoint trajectories
#Inliers=[ 9 21 25 17 18  5]
Finished connecting disjoint trajectories
Saved our_predictions/Machine_Hall0-3/Estimated_vs_Ground_Truth.pdf
Machine Hall0-3 Err (t): 0.021m | Recall [(0.01, 10002.0, 34.722), (0.025, 10003.0, 80.449), (0.05, 10005.0, 97.972), (0.1, 10010.0, 100.0)] | 0:12:40.779849
