Mismatch between the number of point predictions and ground truth #29
Comments
As explained in the README:
You need to pass the corresponding config file, in your case, Doing that, the printed result should also match the result from the paper. Note that slight variations are possible depending on the version of KISS-ICP that you use. To fully reproduce the results, you have to check out at the tag
Did you check out at the tag? Note that the results differ slightly because the KISS-ICP poses differ in version 0.4.0. This was fixed in version 1.0.0 by PRBonn/kiss-icp#300, so I recommend using the latest MapMOS version unless you want to stay close to the paper's results.
I am using the latest version of MapMOS because I agree that I should use it. I didn't try to reproduce the paper's figures exactly, and I understand there can be variations, but the IoU of 77.3% I got seemed too different even accounting for the updated components, so I wondered which part could have gone wrong. Using the latest version should still produce a similar (if not better) IoU, right? Are there any parts missing from the command? `mapmos_pipeline --dataloader kitti --sequence 08 --config /workspaces/MapMOS/config/kitti.yaml --save_kitti /workspaces/MapMOS/pretrained_ckpt/mapmos.ckpt /home/datasets/semantic_kitti/data`
Your command looks fine! I get the same results as you when running the latest MapMOS version. I had a quick look at the different qualitative results and saw that, at the beginning of the sequence, MapMOS at the tag
Thanks for your explanation and for re-checking. I understand now, because I also noticed that SemanticKITTI marks waiting cars as moving. Closing this issue :)
Hi,
thank you for the open source code.
I ran your application and saved the predictions as introduced in the README with
but I noticed that the dimensions of the outputs didn't match the ground truth. For example, the prediction 000001.label had shape (119219,), while the ground truth 000001.label had shape (123433,). Similarly, the prediction 000000.label had shape (119106,) while the ground truth 000000.label had shape (123389,). Can you point out the missing step here?
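As a quick sanity check, the per-scan label counts can be compared directly, since SemanticKITTI `.label` files store one `uint32` per point. This is only a minimal sketch; the paths in the commented usage are hypothetical placeholders for your own prediction and ground-truth folders:

```python
import numpy as np

def read_labels(path):
    """Load a SemanticKITTI-style .label file: one uint32 label per point."""
    return np.fromfile(path, dtype=np.uint32)

# Hypothetical paths, adjust to your own setup:
# pred = read_labels("predictions/sequences/08/predictions/000001.label")
# gt   = read_labels("semantic_kitti/sequences/08/labels/000001.label")
# print(pred.shape, gt.shape)  # differing shapes mean differing point counts
```

If the two shapes differ, the predictions were produced on a different number of points than the raw scan contains (e.g. after some form of point dropping or deskewing).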
One more thing: the printed result (Moving IoU) was 77.285%, which also does not seem to match the validation result in the paper. Did I miss something here?
Thank you!
Best,
Zinuo