Question about validation performance #2
Comments
Hi @jokester-zzz, thanks for your attention. Did you evaluate the generated SSC results with the official SemanticKITTI ops? In the SSC dataset, many voxels are marked as invalid (i.e., they are not observed by any LiDAR frame), so these voxels may look strange in a visualization but are not considered in the benchmark evaluation. You can check the quantitative results to verify whether the model performs normally.
You can use the official ops to run the evaluation.
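For reference, the official `semantic-kitti-api` repository provides `evaluate_completion.py` for this. The sketch below illustrates the key step that evaluation performs: unpacking the packed-bit `.invalid` masks and excluding those voxels from the IoU. The file paths are placeholders, and raw SemanticKITTI label ids are assumed to already be remapped to contiguous train ids via the `learning_map` in `semantic-kitti.yaml`.

```python
import numpy as np

GRID_SIZE = 256 * 256 * 32  # SemanticKITTI SSC voxel grid, flattened
NUM_CLASSES = 20            # placeholder: 19 semantic classes + empty

def load_voxel_labels(path):
    """Read a SemanticKITTI voxel .label file (one uint16 per voxel)."""
    return np.fromfile(path, dtype=np.uint16).astype(np.int64)

def load_invalid_mask(path):
    """Read a packed-bit .invalid file and unpack to one bool per voxel."""
    return np.unpackbits(np.fromfile(path, dtype=np.uint8)).astype(bool)

def masked_confusion(pred, gt, invalid, num_classes=NUM_CLASSES):
    """Confusion matrix accumulated over valid voxels only.
    pred/gt must already hold contiguous train ids (raw SemanticKITTI
    label ids are remapped via the learning_map in semantic-kitti.yaml)."""
    keep = ~invalid
    p, g = pred[keep], gt[keep]
    return np.bincount(g * num_classes + p,
                       minlength=num_classes ** 2).reshape(num_classes, -1)

def iou_from_confusion(conf):
    inter = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - inter
    return inter / np.maximum(union, 1)

# toy run with synthetic data; real inputs come from the loaders above
rng = np.random.default_rng(0)
gt = rng.integers(0, NUM_CLASSES, GRID_SIZE)
pred = rng.integers(0, NUM_CLASSES, GRID_SIZE)
invalid = rng.random(GRID_SIZE) < 0.1
iou = iou_from_confusion(masked_confusion(pred, gt, invalid))
print("mIoU over semantic classes:", iou[1:].mean())  # class 0 = empty
```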
Thanks for your reply! I think the results do not seem right...
@jokester-zzz Hi, I double-checked the model we released on the validation set; it can obtain the following results without voting. I think your problem may be caused by:
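For readers unfamiliar with the term: "voting" here most likely refers to test-time augmentation, where logits from transformed copies of the input are averaged. A minimal sketch under that assumption; `model`, the tensor layout, and the flip axis are placeholders rather than this repository's actual interface.

```python
import torch

@torch.no_grad()
def vote_logits(model, voxels):
    """Average logits over the original input and a horizontally
    flipped copy (test-time augmentation 'voting'). `model` is a
    placeholder mapping a (B, C, X, Y, Z) tensor to per-voxel logits
    with the same spatial layout."""
    logits = model(voxels)
    flipped = model(torch.flip(voxels, dims=[2]))     # flip along X
    logits = logits + torch.flip(flipped, dims=[2])   # flip back, add
    return logits / 2

# usage sketch: pred = vote_logits(model, voxels).argmax(dim=1)
```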
Hi, @yanx27.
@jokester-zzz Congratulations! |
Hi @yanx27, I just ran into the same problem that @jokester-zzz described.
Thanks for the open-source code.
I followed your guide and used `test_kitti_ssc.py` to run on sequence 08 with the pretrained model you provided, but I encountered some problems. Here is a visualization of the predictions from sequence 08. The visualization results look a little strange, and I don't know what went wrong.
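To reproduce this kind of visualization, here is a minimal sketch that renders a predicted voxel grid as a 3D scatter of non-empty voxels, assuming predictions are stored in the SemanticKITTI voxel `.label` format (uint16, 256x256x32 grid); the path and colour map are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

GRID_SHAPE = (256, 256, 32)

# placeholder path to one predicted frame of sequence 08
labels = np.fromfile("predictions/08/000000.label", dtype=np.uint16)
labels = labels.reshape(GRID_SHAPE)

occupied = labels != 0               # label 0 = empty space
x, y, z = np.nonzero(occupied)

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(projection="3d")
# colour by label id; a real viewer would use the SemanticKITTI palette
ax.scatter(x, y, z, c=labels[occupied], s=1, cmap="tab20")
ax.set_box_aspect(GRID_SHAPE)
plt.show()
```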
And when I try to run inference on the test set, the code reports an error: it appears to be reading label files, which do not exist for the test set. After I worked around this, the results on the test set still looked strange. I wonder if I made some mistakes.
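One common workaround for the missing test-set labels is to guard label loading on the split and return a placeholder instead, so inference can still run. A minimal sketch of that pattern; the helper name, file layout, and ignore index are assumptions, not this repository's actual dataloader.

```python
import os
import numpy as np

IGNORE_INDEX = 255  # placeholder id that the loss/metrics would skip

def load_frame_labels(label_path, split):
    """Hypothetical helper: return voxel labels, or an ignore-filled
    placeholder on the unlabeled test split so inference can proceed."""
    if split != "test" and os.path.exists(label_path):
        return np.fromfile(label_path, dtype=np.uint16).astype(np.int64)
    # the test split ships no ground truth; predictions are written
    # out and scored later by the benchmark server
    return np.full(256 * 256 * 32, IGNORE_INDEX, dtype=np.int64)
```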
Looking forward to your reply. Best wishes!