
Split image-level processing from cross-image processing #8

Open

piercus opened this issue Jul 24, 2024 · 1 comment

piercus commented Jul 24, 2024

Hello @lartpang, thanks for this great lib.

I would like to be able to store image-level metrics when evaluating a big dataset.

Context

For example:

Image 1:

  • WeightedFmeasure: 92%
  • Emeasure: 94%
  • tags: foo, bar

Image 2:

  • WeightedFmeasure: 93%
  • Emeasure: 90%
  • tags: bar

Image 3:

  • WeightedFmeasure: 88%
  • Emeasure: 97%
  • tags: foo

And then to be able to run the evaluation on different tag subsets (foo/bar), without re-running the image-level metric computation.
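To make the intent concrete, here is a minimal sketch of that workflow. The storage format and the `mean_for_tag` helper are hypothetical, not part of the current library; the point is that per-image values are computed once and can then be aggregated for any tag subset:

```python
# Hypothetical per-image records: metric values computed once, stored with tags.
records = [
    {"name": "Image1", "wfm": 0.92, "em": 0.94, "tags": {"foo", "bar"}},
    {"name": "Image2", "wfm": 0.93, "em": 0.90, "tags": {"bar"}},
    {"name": "Image3", "wfm": 0.88, "em": 0.97, "tags": {"foo"}},
]

def mean_for_tag(tag: str, key: str) -> float:
    """Average a pre-stored image-level metric over one tag subset."""
    values = [r[key] for r in records if tag in r["tags"]]
    return sum(values) / len(values)

print(mean_for_tag("foo", "wfm"))  # (0.92 + 0.88) / 2 = 0.90
print(mean_for_tag("bar", "em"))   # (0.94 + 0.90) / 2 = 0.92
```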

Current behavior

In the current library this is not straightforward, because the image-level processing and the cross-image processing are done together, and there is no cross-metric convention for exposing per-image values.
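For reference, today's coupled usage looks roughly like this (a sketch assuming the `py_sod_metrics`-style `step`/`get_results` API; the random arrays stand in for real predictions and masks):

```python
import numpy as np
from py_sod_metrics import Emeasure, WeightedFmeasure  # assumed import path

wfm, em = WeightedFmeasure(), Emeasure()
rng = np.random.default_rng(0)
for _ in range(3):  # stand-in for looping over a real dataset
    pred = rng.integers(0, 256, (64, 64)).astype(np.uint8)    # grayscale prediction
    gt = (rng.random((64, 64)) > 0.5).astype(np.uint8) * 255  # binary mask
    # step() both computes the per-image value and folds it into internal
    # state; the two stages cannot be separated, so per-image values are lost.
    wfm.step(pred=pred, gt=gt)
    em.step(pred=pred, gt=gt)

# get_results() aggregates over every image fed to step() so far.
print(wfm.get_results(), em.get_results())
```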

Suggestion

In metric.step(pred, gt):

  • return the image-level value of the metric (this could be an array, e.g. for dynamic results)
  • split the processing into two separate steps:
    • compute the metric: metric.compute(pred, gt) -> value
    • store the value internally: metric.load(value)

As a result:

  • we can store image-level metrics when running metric.step(pred, gt)
  • we can reuse pre-stored metrics by calling metric.load(value) afterward, before running metric.get_results()
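A rough sketch of what this split could look like (a hypothetical base class illustrating the proposal above, not existing code):

```python
import numpy as np

class SplitMetric:
    """Hypothetical sketch of the proposed two-stage API (names are assumptions)."""

    def __init__(self):
        self.values = []  # image-level values stored so far

    def compute(self, pred: np.ndarray, gt: np.ndarray) -> float:
        """Pure image-level computation; touches no internal state."""
        raise NotImplementedError

    def load(self, value: float) -> None:
        """Store a (possibly pre-computed) image-level value internally."""
        self.values.append(value)

    def step(self, pred: np.ndarray, gt: np.ndarray) -> float:
        """Backward compatible: compute, store, and also return the value."""
        value = self.compute(pred, gt)
        self.load(value)
        return value

    def get_results(self) -> dict:
        """Cross-image aggregation over whatever was loaded."""
        return {"mean": float(np.mean(self.values))}
```

With this split, the per-image values returned by step() can be persisted next to their tags, and a fresh metric instance can later be rebuilt for any tag subset via load() alone.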

Would you be interested in changing the API for this? Would you like some help?

piercus changed the title from "add a load() step()" to "Split image-level processing from cross-image processing" on Jul 24, 2024
lartpang (Owner) commented Jul 25, 2024

@piercus

Thanks for the idea. Maybe the following extension code is what you want?

The code below is used to evaluate algorithms on video tasks:

https://github.com/lartpang/PySODEvalToolkit/blob/f12dcf5925750c9ea73c535ee8d99ed40cf1cb4c/utils/recorders/metric_recorder.py#L201-L279
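For context, the linked code keeps metric state per video sequence rather than globally. The general pattern it suggests, condensed with hypothetical names (this is not the file's actual code), is group-keyed recording followed by a two-level aggregation:

```python
from collections import defaultdict

import numpy as np

per_group = defaultdict(list)  # e.g. one key per video sequence (or per tag)

def record(group: str, value: float) -> None:
    """Store one image/frame-level value under its group."""
    per_group[group].append(value)

def aggregate() -> dict:
    """Average within each group first, then across groups."""
    group_means = {g: float(np.mean(vs)) for g, vs in per_group.items()}
    return {"per_group": group_means,
            "overall": float(np.mean(list(group_means.values())))}
```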
