
The reproduced results on the Waymo dataset #20

Open
WWW2323 opened this issue Dec 15, 2022 · 3 comments

WWW2323 commented Dec 15, 2022

Hi, dear author, there is about a 0.5~1.0 gap between the results I reproduced and the results the paper reports on the Waymo dataset. Is this normal variance, or is something wrong with my reproduction? Do you pre-train Voxel-MAE with 100% of the Waymo training data? I pre-trained Voxel-MAE with this config for 30 epochs and used the 30th-epoch checkpoint to initialize CenterPoint. After fine-tuning for 30 epochs, I get the following results:
[image: reproduced results]
And the results reported in the paper are:
[image: paper results]
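For anyone reproducing this, the initialization step described above (taking the 30th-epoch Voxel-MAE checkpoint and warm-starting CenterPoint from it) amounts to a filtered state-dict copy: keep the pre-trained encoder weights whose names match the detector, discard MAE-only parts such as the decoder. A minimal sketch, with checkpoints modeled as plain dicts and hypothetical parameter names; real code would go through `torch.load` / `load_state_dict`:

```python
def init_from_pretrained(detector_state, mae_state, skip_prefix="decoder."):
    """Copy pre-trained weights into the detector's state dict.

    Only keys present in both dicts are copied; MAE-only parameters
    (here assumed to live under `decoder.`) are discarded, since the
    decoder is only used during masked pre-training.
    Returns the list of parameter names that were loaded.
    """
    loaded = []
    for name, weight in mae_state.items():
        if name.startswith(skip_prefix):
            continue  # decoder is thrown away after pre-training
        if name in detector_state:
            detector_state[name] = weight
            loaded.append(name)
    return loaded


# Toy example with made-up parameter names:
detector = {"backbone_3d.conv1.weight": 0.0, "dense_head.weight": 0.0}
mae_ckpt = {"backbone_3d.conv1.weight": 1.0, "decoder.weight": 9.0}
init_from_pretrained(detector, mae_ckpt)
# backbone weight now comes from the MAE checkpoint; the detection head
# keeps its fresh initialization and is learned during fine-tuning.
```

This matches the usual MAE recipe: only the encoder transfers, and the task-specific head is trained from scratch.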

chaytonmin (Owner) commented:


We pre-train Voxel-MAE with ~20% of the Waymo training samples, following OpenPCDet.
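For context, the ~20% subset comes from frame subsampling: OpenPCDet's Waymo dataset config keeps every 5th training frame (a sampled interval of 5; the exact value is assumed from the public config). A minimal sketch of that selection:

```python
def subsample_frames(frame_ids, interval=5):
    """Keep every `interval`-th frame.

    With interval=5 this retains ~20% of the frames, matching the
    training-set subsampling commonly used in OpenPCDet's Waymo config
    (interval value assumed, not taken from this repository).
    """
    return frame_ids[::interval]


# 1000 frames -> 200 kept, i.e. 20% of the training data.
kept = subsample_frames(list(range(1000)))
```

Because consecutive Waymo frames are highly redundant (10 Hz capture), sampling every 5th frame loses little scene diversity while cutting training cost substantially.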

WWW2323 (Author) commented Dec 15, 2022:

@chaytonmin Hi, dear author, thanks for your quick reply. Is my result normal? How many epochs did you pre-train Voxel-MAE for: 3, 20, or 30? The pre-training epoch count in the code is 30, while the paper mentions 3; is that a mistake?
[image]

chaytonmin (Owner) commented:


The result is normal; environmental differences can cause minor changes in results.
Voxel-MAE converges within 3 epochs on the KITTI dataset, so we pre-trained for only 3 epochs. Given limited time, we did not run many experiments; more epochs may be more suitable.
