
Mismatch on the number of point preds and gt #29

Closed
Zero-Yi opened this issue Jan 11, 2025 · 6 comments

Comments

@Zero-Yi

Zero-Yi commented Jan 11, 2025

Hi,

thank you for the open source code.

I ran your pipeline and saved the predictions as described in the README with

mapmos_pipeline --dataloader kitti --sequence 08 --save_kitti /workspaces/MapMOS/pretrained_ckpt/mapmos.ckpt /home/datasets/semantic_kitti/data

but I noticed that the number of output points did not match the ground truth. For example, the prediction 000001.label had shape (119219,), while the ground-truth 000001.label had shape (123433,). Similarly, the prediction 000000.label had shape (119106,) while the ground-truth 000000.label had shape (123389,). Could you point out which step I am missing?
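
For reference, a minimal sketch of how I compared the shapes, assuming the labels are stored in the standard SemanticKITTI binary format (one uint32 per point); the file paths below are placeholders, not the exact output locations:

    import numpy as np

    # Placeholder paths; point them at the saved predictions and the ground-truth labels.
    pred = np.fromfile("predictions/sequences/08/000001.label", dtype=np.uint32)
    gt = np.fromfile("/home/datasets/semantic_kitti/data/sequences/08/labels/000001.label", dtype=np.uint32)

    # Each array should contain one label per point of the scan.
    print(pred.shape, gt.shape)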

One more thing: the printed result (moving IoU) was 77.285%, which also does not seem to match the validation result in the paper. Did I miss something here?

Thank you!

Best,
Zinuo

@benemer
Member

benemer commented Jan 11, 2025

As explained in the README:

Want to reproduce the results from the paper?
For reproducing the results of the paper, you need to pass the corresponding config file. They will make sure that the de-skewing option and the maximum range are set properly.

You need to pass the corresponding config file, in your case kitti.yaml. Otherwise, the point cloud will be pre-processed and clipped, causing a mismatch in the number of points.

Doing that, the printed result should also match the result from the paper. Note that slight variations are possible depending on the version of KISS-ICP that you use. To fully reproduce the results, you have to check out the tag mersch2023ral.

@Zero-Yi
Author

Zero-Yi commented Jan 11, 2025

I added the config as:

mapmos_pipeline --dataloader kitti --sequence 08 --config /workspaces/MapMOS/config/kitti.yaml --save_kitti /workspaces/MapMOS/pretrained_ckpt/mapmos.ckpt /home/datasets/semantic_kitti/data

Now the numbers match, but I still only get a moving IoU of 77.351%.
Am I still missing something here?

@benemer
Member

benemer commented Jan 12, 2025

Did you check out the tag mersch2023ral that we used for the paper experiments? That version uses kiss-icp==0.4.0. I just ran it on sequence 08 again and got these results, which are very close to the ones from the paper:

[screenshot of the evaluation results]

Note that the results differ slightly because the KISS-ICP poses differ in version 0.4.0. This was fixed in version 1.0.0 by PRBonn/kiss-icp#300, so I recommend using the latest MapMOS version unless you want to stay close to the paper's results.

@Zero-Yi
Author

Zero-Yi commented Jan 12, 2025

I am using the latest version of MapMOS because I agree that I should. I am not trying to reproduce the paper's figures exactly; I understand there can be variations, but an IoU of 77.3% seemed too far off even accounting for the updated components, and I wondered which part could have gone wrong.

So, with the latest version I should get a similar (if not better) IoU, right? Is anything missing from the command?

mapmos_pipeline --dataloader kitti --sequence 08 --config /workspaces/MapMOS/config/kitti.yaml --save_kitti /workspaces/MapMOS/pretrained_ckpt/mapmos.ckpt /home/datasets/semantic_kitti/data

@benemer
Member

benemer commented Jan 12, 2025

Your command looks fine! I get the same results as you when running the latest MapMOS version.

I had a quick look at the qualitative results of both versions. At the beginning of the sequence, MapMOS at the tag mersch2023ral predicts the car to the right of the ego-vehicle as moving, whereas the latest MapMOS does not. The car is moving very slowly and finally comes to a complete stop, so it is debatable whether it is moving. The SemanticKITTI labels consider an object to be moving if it moved throughout the sequence; therefore, this car has the ground-truth label "moving". This results in a larger IoU for the mersch2023ral version, even though the vehicle stops moving at some point. Since there are not that many moving objects in the KITTI data, one missed object with many points can heavily influence the average IoU.
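
As a side note, here is a minimal sketch of how such a sequence-level moving IoU can be accumulated, assuming the predicted and ground-truth labels have already been reduced to per-scan boolean moving/static masks (the class mapping itself is not shown):

    import numpy as np

    def moving_iou(pred_masks, gt_masks):
        # pred_masks, gt_masks: iterables of boolean arrays, True = point predicted/labeled as moving.
        tp = fp = fn = 0
        for pred, gt in zip(pred_masks, gt_masks):
            tp += np.sum(pred & gt)   # moving points correctly predicted as moving
            fp += np.sum(pred & ~gt)  # static points predicted as moving
            fn += np.sum(~pred & gt)  # moving points predicted as static
        return tp / (tp + fp + fn)

Because the counts are summed over the whole sequence before dividing, a single consistently missed object with many points (like the slowly moving car above) can shift the final IoU noticeably.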

@Zero-Yi
Author

Zero-Yi commented Jan 12, 2025

Thanks for the explanation and for re-checking. That makes sense now; I had also noticed that SemanticKITTI marks waiting cars as moving.

Closing this issue :)

Zero-Yi closed this as completed on Jan 12, 2025