

Carla metrics bug fix #1883

Merged: 9 commits merged into twosixlabs:develop from carla-metrics-bug-fix on Feb 6, 2023
Conversation

@swsuggs (Contributor) commented Feb 3, 2023

Mike's description of the error from Slack:

It seems that per-example metrics are always collected for OD scenarios even if metric -> record_metric_per_sample is False. In addition, with the exception of AP_per_class, the other metrics no longer report the mean statistic even if metric -> means is True.

This is essentially because the GlobalMeter class used for @populationwise metric functions assumes that the functions already return the mean over all samples. This is true for AP_per_class, but not for the others.

Easy fix: just have the metric functions return the mean over all samples. This seems to be what the writer of this code (David) expected.

But that leaves questions about why the CARLA OD functions are marked @populationwise in the first place, and whether a user would ever want to see per-image results. In that case, should this be controllable from the config, or should we just return both?
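
For context, a minimal sketch of the mismatch, assuming GlobalMeter calls a @populationwise metric once on all accumulated samples and records its return value verbatim as the final result. The function bodies below are illustrative stand-ins, not the real CARLA OD metrics:

```python
import numpy as np

def score_one_image(y, y_pred):
    # Hypothetical per-image score; stands in for a real OD statistic.
    return float(y == y_pred)

def per_image_metric(ys, ys_pred):
    # Non-AP metric as currently written: one value per image.
    # GlobalMeter records this raw list, so no mean is ever reported.
    return [score_one_image(y, p) for y, p in zip(ys, ys_pred)]

def per_image_metric_fixed(ys, ys_pred):
    # The "easy fix": aggregate over samples inside the metric so that
    # GlobalMeter's already-aggregated assumption holds, as it does for
    # AP_per_class.
    return float(np.mean([score_one_image(y, p) for y, p in zip(ys, ys_pred)]))

# per_image_metric([1, 0, 1], [1, 1, 1])       -> [1.0, 0.0, 1.0]  (bug)
# per_image_metric_fixed([1, 0, 1], [1, 1, 1]) -> 0.666...         (intended)
```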

@swsuggs (Contributor, Author) commented Feb 3, 2023

The present change simply returns both the mean and the per-image values. @lcadalzo please weigh in, if you'd like, on what the behavior here should be.
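
For illustration, one hedged guess at what "returns both" might look like at the result level; the key names are assumptions, not necessarily the PR's actual output format:

```python
import numpy as np

def record_both(per_image_scores):
    # Hypothetical result shape: the mean statistic alongside the raw
    # per-image values.
    scores = [float(s) for s in per_image_scores]
    return {"mean": float(np.mean(scores)), "per_image": scores}

# record_both([1.0, 0.0, 1.0])
# -> {"mean": 0.666..., "per_image": [1.0, 0.0, 1.0]}
```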

@lcadalzo (Contributor) commented Feb 3, 2023

I believe the only OD metrics that need to be populationwise are the AP ones, since mAP must receive all labels/predictions as input at once at the end of the scenario. Adding a new set of mean and per_image params as metric kwargs seems like a very messy and confusing workaround.

The cleanest solution, unless there's something here I'm missing, would be as follows: treat the non-AP metrics like other "normal" ones (e.g. those for image classification), i.e. decorate them as @batchwise, and let the record_metric_per_sample and means params in the metric config determine what gets output, as designed and as occurs for other metrics.
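
A sketch of that split, assuming decorators that merely tag a metric with its aggregation granularity, plus a config fragment using the keys named above. The non-AP metric name and the decorator internals are illustrative assumptions, not armory's actual code:

```python
# Illustrative decorators; armory's real @batchwise/@populationwise
# implementations may differ.

def batchwise(fn):
    fn.aggregation = "batch"        # computed per batch; the config's
    return fn                       # means/record_metric_per_sample apply

def populationwise(fn):
    fn.aggregation = "population"   # called once on all samples at the end
    return fn

@populationwise
def AP_per_class(ys, ys_pred):
    # mAP genuinely needs every label/prediction at once, so the AP
    # metrics stay populationwise.
    ...

@batchwise
def carla_od_hallucinations(y, y_pred):
    # Hypothetical non-AP CARLA OD metric: treated like any "normal"
    # metric, so the metric config decides what gets recorded.
    ...

# Config fragment with the keys named above (illustrative values):
metric_config = {
    "means": True,                      # report the mean statistic
    "record_metric_per_sample": False,  # drop raw per-image values
}
```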

@swsuggs requested a review from lcadalzo February 6, 2023 14:32
@swsuggs changed the title from "WIP Carla metrics bug fix" to "Carla metrics bug fix" Feb 6, 2023
@lcadalzo (Contributor) left a review comment:

This is working as expected for me now

@lcadalzo merged commit a3aa999 into twosixlabs:develop Feb 6, 2023
@swsuggs deleted the carla-metrics-bug-fix branch February 6, 2023 15:17