Confusion about the result differences when retraining 3DSSD #659

Closed
zhanghm1995 opened this issue Jun 18, 2021 · 3 comments

Comments


zhanghm1995 commented Jun 18, 2021

Thanks for sharing this wonderful project.

These days I have been trying to train the 3DSSD network myself. I used the latest code from the master branch with the config file 3dssd_4x4_kitti-3d-car.py, and I didn't change anything, but after training I got very different evaluation results on the KITTI val dataset.

My results:

Car AP@0.70, 0.70, 0.70:
bbox AP:90.3109, 80.9904, 80.1693
bev  AP:89.1738, 81.4479, 79.0906
3d   AP:85.3059, 69.3424, 68.5501
aos  AP:90.30, 80.94, 80.06
Car AP@0.70, 0.50, 0.50:
bbox AP:90.3109, 80.9904, 80.1693
bev  AP:90.3775, 88.9195, 88.7843
3d   AP:90.3421, 87.6855, 86.7337
aos  AP:90.30, 80.94, 80.06

Meanwhile, the evaluation results of the pretrained model from this link are:

Car AP@0.70, 0.70, 0.70:
bbox AP:95.0919, 89.9625, 89.3045
bev  AP:90.4323, 88.1866, 86.2247
3d   AP:88.6645, 78.4028, 77.1643
aos  AP:95.06, 89.89, 89.16
Car AP@0.70, 0.50, 0.50:
bbox AP:95.0919, 89.9625, 89.3045
bev  AP:95.1344, 90.1295, 89.6641
3d   AP:95.0992, 90.0795, 89.5649
aos  AP:95.06, 89.89, 89.16

From the above results, I notice there is a fairly large performance gap, and I don't know the reason.

My training environment is 4 NVIDIA V100 GPUs, and the batch size is also 4.
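
For reference, in the OpenMMLab config naming convention the "4x4" in 3dssd_4x4_kitti-3d-car.py is read as 4 GPUs x 4 samples per GPU, i.e. an effective batch size of 16, which should match this setup. A minimal, hypothetical Python sketch of the corresponding data settings (not a copy of the repository config):

    # Hypothetical sketch of the batch-size-related settings implied by the
    # "4x4" in the config name; the actual configs/3dssd/3dssd_4x4_kitti-3d-car.py
    # in the repository is authoritative.
    data = dict(
        samples_per_gpu=4,   # per-GPU batch size
        workers_per_gpu=4,   # assumed dataloader workers per GPU, for illustration
    )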

BTW, when I visualized the loss logs, I found that the centerness_loss fluctuates a lot. I'm not sure whether this is one of the factors causing the performance difference.
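
In case it helps with reproducing the curve, here is a minimal plotting sketch, assuming the usual one-JSON-dict-per-line *.log.json written during training; the log path and the exact loss key name ("centerness_loss") are placeholders and may differ in an actual run:

    # Plot one loss term from an MMDetection3D-style *.log.json training log.
    # The path and the key name below are assumptions; check your own log.
    import json
    import matplotlib.pyplot as plt

    log_file = 'work_dirs/3dssd_4x4_kitti-3d-car/20210615_000000.log.json'  # hypothetical path

    steps, values = [], []
    with open(log_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            # Keep only training entries that actually report the term.
            if entry.get('mode') == 'train' and 'centerness_loss' in entry:
                steps.append(len(steps))
                values.append(entry['centerness_loss'])

    plt.plot(steps, values)
    plt.xlabel('logged training step')
    plt.ylabel('centerness_loss')
    plt.savefig('centerness_loss.png')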

xiliu8006 pinned this issue Jun 21, 2021
@xiliu8006 (Contributor)

We fixed a bug in 3DSSD; you can refer to this PR. So you may need to train a new model.

zhanghm1995 (Author) commented Jun 22, 2021

@xiliu8006 That's strange. I used almost the latest code, pulled on June 15th, while the PR you mentioned was merged more than 20 days ago.

In any case, I will retrain 3DSSD to see whether I reach the same conclusion.

Tai-Wang unpinned this issue Jun 23, 2021
@Wuziyi616 (Contributor)

@zhanghm1995 any updates here? Do you still have this problem, or can we close this issue?
