This repository has been archived by the owner on Jul 31, 2024. It is now read-only.

How to evaluate on Nuscenes test set? #399

Closed
think-twice-1218 opened this issue Apr 26, 2023 · 4 comments

Comments

@think-twice-1218

Thank you for sharing your great idea!
We are trying to evaluate it on the test set, but there are some issues. We noticed that in a previous issue (#347), the author mentioned removing everything related to GT.
Would it be possible to share the details? Looking forward to your response.

(screenshots of the errors attached)

@GerhardArya

GerhardArya commented Apr 26, 2023

I'm also facing the exact same problem. Just dropping a comment here so hopefully I'll be aware if a solution is ever posted here.

Right now the default configs in BEVFusion seem to run evaluation on the val split, which is also used during training. Based on the code in the nuScenes converter, any nuScenes test set it generates will not have num_lidar_pts, because the insertion of num_lidar_pts and several other attributes is conditioned on "if not test".

I'm working on a custom dataset that is converted and evaluated in essentially the same way BEVFusion handles nuScenes. So my test set also has no annotations and none of the attributes conditioned on "if not test", including num_lidar_pts, which leads to an error. If I evaluate on the val split instead, as the nuScenes config does, my evaluation code runs without errors.

I could change the converter slightly to include attributes like num_lidar_pts, or insert them as empty lists, but I'm not sure whether the existing behavior is intentional.
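The conditional described above can be sketched roughly like this (a simplified, hypothetical stand-in for the converter logic, not the actual mmdet3d code):

```python
# Simplified sketch of the "if not test" pattern described above.
# The real nuScenes converter does much more; this only illustrates
# why test-split info dicts end up without annotation-dependent fields.
def build_info(sample, test=False):
    info = {
        "token": sample["token"],
        "lidar_path": sample["lidar_path"],
    }
    if not test:
        # GT-dependent fields are only attached for train/val splits,
        # so a test-split info never contains num_lidar_pts.
        info["gt_boxes"] = sample["gt_boxes"]
        info["num_lidar_pts"] = sample["num_lidar_pts"]
    return info
```

Evaluation code that unconditionally reads `info["num_lidar_pts"]` will then fail with a KeyError on a test-split pickle.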

@GerhardArya

GerhardArya commented Apr 26, 2023

@think-twice-1218 I found the solution here: #233 (comment)

I tried the changes mentioned there, but adjusted them slightly so that I don't have to keep changing things when switching between training and evaluation. It works with my custom test set now. If this also works for you on the nuScenes test set, I think this issue can be closed.
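For anyone landing here later, the shape of that workaround (guarding the GT-dependent steps instead of toggling configs) could look like the following; the function name and structure are hypothetical, not the exact change from #233:

```python
def filter_boxes_by_lidar_pts(info, min_pts=1):
    # If the split was converted with test=True there are no
    # annotations to filter, so pass the info through unchanged.
    # This lets one code path serve train/val and test without edits.
    if "num_lidar_pts" not in info:
        return info
    keep = [i for i, n in enumerate(info["num_lidar_pts"]) if n >= min_pts]
    info["gt_boxes"] = [info["gt_boxes"][i] for i in keep]
    info["num_lidar_pts"] = [info["num_lidar_pts"][i] for i in keep]
    return info
```

The same guard applied wherever the pipeline touches GT attributes is what makes the test split usable without retraining-time config churn.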

@kentang-mit
Contributor

Hi folks,

Sorry for the delayed response; I have been working on other projects recently. The solution mentioned by @GerhardArya makes sense to me. Make sure to include both the train and val data during training if you want to make a test-server submission; otherwise you might not be able to reproduce the results in our paper. I also mentioned here how to get the json predictions.
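For reference, the nuScenes detection test server expects a json with a fixed top-level layout; a minimal sketch of dumping predictions into that shape (`write_nusc_submission` is a hypothetical helper, not a BEVFusion function):

```python
import json

def write_nusc_submission(results_by_token, path):
    # nuScenes detection submission format: a "meta" block declaring
    # which modalities were used, plus "results" mapping each sample
    # token to its list of predicted boxes.
    submission = {
        "meta": {
            "use_camera": True,
            "use_lidar": True,
            "use_radar": False,
            "use_map": False,
            "use_external": False,
        },
        "results": results_by_token,
    }
    with open(path, "w") as f:
        json.dump(submission, f)
```

The `meta` flags should match the modalities your model actually consumed, and every sample token in the split must appear in `results`.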

Best,
Haotian

@vtghsr

vtghsr commented Dec 31, 2023


@GerhardArya Hi, your solution works for me, but now I'm running into another error. Did you have the same problem while evaluating on the test set?
(error screenshot attached)
