Question about validation performance #2

Closed
jokester-zzz opened this issue Jan 14, 2021 · 7 comments

@jokester-zzz

Thanks for the open-source code.
I followed your guide and used test_kitti_ssc.py to run on sq08 with the pretrained model you provided, but I encountered some problems. Here is a visualization of the predictions from sq08.
[screenshot: visualization of the predicted SSC results on sq08]
The visualization results look a little strange, and I don't know what went wrong.
Also, when I try to run inference on the test set, the code reports an error: it seems to want to read the label files, which do not exist for the test set. After I tried to fix these problems, the results for the test set were still strange. I wonder if I made some mistakes.
Looking forward to your reply. Best wishes!

@yanx27
Owner

yanx27 commented Jan 15, 2021

Hi @jokester-zzz, thanks for your attention. Did you evaluate the generated SSC results with the official SemanticKITTI ops? In the SSC dataset, many voxels are marked as invalid (i.e., they cannot be observed by any LiDAR frame), so those voxels may look strange in a visualization but are not counted in the benchmark evaluation. You can check the quantitative results to verify that the model performs normally.
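
For reference, this is roughly how the invalid voxels can be excluded if you compute metrics yourself. It is only a sketch: the grid size and bit-packing assumptions below come from the SemanticKITTI voxel format description, not from this repository's code, so please verify them against the official ops.

```python
import numpy as np

# Assumptions (not taken from this repo): the SSC grid is 256 x 256 x 32, ground-truth
# labels are stored as one uint16 per voxel, and the .invalid files store one bit per
# voxel, packed into bytes.
N_VOXELS = 256 * 256 * 32

def load_labels_and_valid_mask(label_path: str, invalid_path: str):
    labels = np.fromfile(label_path, dtype=np.uint16)                     # class id per voxel
    invalid = np.unpackbits(np.fromfile(invalid_path, dtype=np.uint8))[:N_VOXELS]
    # Invalid voxels are never observed by any LiDAR frame, so they are excluded from
    # the benchmark metrics even if a visualization still shows them.
    return labels, invalid == 0
```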

@yanx27
Owner

yanx27 commented Jan 15, 2021

You can use the official ops and run $ ./evaluate_completion.py --dataset /path/to/kitti/dataset/ --predictions /path/to/method_predictions --split train/valid/test # depending on the desired split to evaluate :)
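
As a side note, here is a quick way to sanity-check the predictions folder before running the script. The layout assumed below (sequences/<seq>/predictions/<frame>.label) is my understanding of what the semantic-kitti-api expects; please double-check it against the API's README.

```python
import os

def check_predictions_layout(pred_root: str, sequence: str = "08") -> None:
    # Assumed layout: <pred_root>/sequences/<sequence>/predictions/<frame>.label
    pred_dir = os.path.join(pred_root, "sequences", sequence, "predictions")
    if not os.path.isdir(pred_dir):
        raise FileNotFoundError(f"expected directory not found: {pred_dir}")
    n_labels = sum(f.endswith(".label") for f in os.listdir(pred_dir))
    print(f"{pred_dir}: {n_labels} .label files")

if __name__ == "__main__":
    check_predictions_layout("/path/to/method_predictions")
```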

@jokester-zzz
Author

Thanks for your reply!
So I did not submit the test-set results to the website; instead, I used evaluate_completion.py to evaluate sq08. The results are as follows.
========================== RESULTS ==========================
Validation set:
IoU avg 0.054
IoU class 1 [car] = 0.021
IoU class 2 [bicycle] = 0.000
IoU class 3 [motorcycle] = 0.000
IoU class 4 [truck] = 0.030
IoU class 5 [other-vehicle] = 0.009
IoU class 6 [person] = 0.000
IoU class 7 [bicyclist] = 0.000
IoU class 8 [motorcyclist] = 0.000
IoU class 9 [road] = 0.024
IoU class 10 [parking] = 0.002
IoU class 11 [sidewalk] = 0.052
IoU class 12 [other-ground] = 0.000
IoU class 13 [building] = 0.216
IoU class 14 [fence] = 0.023
IoU class 15 [vegetation] = 0.293
IoU class 16 [trunk] = 0.105
IoU class 17 [terrain] = 0.059
IoU class 18 [pole] = 0.159
IoU class 19 [traffic-sign] = 0.028
Precision = 65.1
Recall = 53.41
IoU Cmpltn = 41.52
mIoU SSC = 5.38

I think the results do not seem right...

@yanx27
Owner

yanx27 commented Jan 16, 2021

@jokester-zzz Hi, I double-checked the model we released on the validation set; it obtains the following results without voting:
[screenshot: validation results of the released model (without voting)]

I think your problem may be caused by one of the following:

  • There is a version gap between your running environment and ours, so the model's results are not right. Please check whether the segmentation task performs normally.
  • If the segmentation results are right, the problem is probably caused by a difference between our SSC datasets. There is an old issue in semantickitti_api here: the SSC data it generated contained a wrong shift in the upward direction. If you check our code carefully, you will find that we add an additional shift to align with their old-version dataset here. So your problem may be that you are using the new-version SSC dataset while our code still applies the shift. You can simply delete that line in the code to solve the problem (see the illustrative sketch after this list).
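
To illustrate the kind of alignment being discussed (this is not the repository's actual code; the one-voxel offset and the z axis here are assumptions made for the example), compensating an upward shift of a (256, 256, 32) label grid could look like this:

```python
import numpy as np

def undo_upward_shift(labels_3d: np.ndarray, offset: int = 1) -> np.ndarray:
    """Shift an SSC label grid back down by `offset` voxels along the z (upward) axis.

    Illustrative only: the real correction in the code may use a different offset,
    direction, or location, and the new-version SSC data needs no shift at all.
    """
    shifted = np.zeros_like(labels_3d)
    # Voxels vacated at the top of the grid are filled with 0 (empty).
    shifted[:, :, :labels_3d.shape[2] - offset] = labels_3d[:, :, offset:]
    return shifted
```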

@jokester-zzz
Author

Hi @yanx27,
I have found my problem! I used spconv 1.2.1, so there were some problems with voxelization. I've just changed the spconv version to 1.0, and the performance gets better.
Thank you for your patient reply; I think my problem is solved!
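
For anyone hitting the same thing, a quick check of which spconv version Python actually imports (nothing repo-specific, just an environment check):

```python
# Older spconv 1.x builds may not define __version__, hence the fallback string.
import spconv
print(getattr(spconv, "__version__", "version attribute not available (likely 1.x)"))
```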

@yanx27
Owner

yanx27 commented Jan 16, 2021

@jokester-zzz Congratulations!

@yanx27 yanx27 closed this as completed Jan 16, 2021
@helincao618

helincao618 commented Sep 2, 2022

Hi @yanx27, I just ran into the same problem that @jokester-zzz described:
'when I try to infer on the test set, the code reports an error. It seems that it wants to read the label files, which do not exist for the test set.'
How did you solve it? Thanks in advance.
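
In case it helps while waiting for a reply: without knowing the exact traceback, a common workaround is to have the data loader return a placeholder label volume when the split is 'test' and no .label file exists. A rough sketch with a hypothetical loader function (the function name, grid shape, and the 255 ignore index are assumptions, not taken from this repo):

```python
import os
import numpy as np

GRID_SHAPE = (256, 256, 32)   # assumed SemanticKITTI SSC grid size
IGNORE_INDEX = 255            # hypothetical ignore id; loss/metrics must skip it

def load_ssc_labels(label_path: str, split: str) -> np.ndarray:
    """Return SSC labels, or an all-ignore dummy volume for the label-free test split."""
    if split == "test" or not os.path.exists(label_path):
        return np.full(GRID_SHAPE, IGNORE_INDEX, dtype=np.uint16)
    return np.fromfile(label_path, dtype=np.uint16).reshape(GRID_SHAPE)
```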
