JaccardIndex: in torchmetrics v0.9, "average" argument replaces "reduction" #805
Comments
@calebrob6 do you remember why we have custom code in evaluate.py?
If we do keep the code as is, we can check the version of torchmetrics being used and conditionally change the kwarg name.
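For illustration, a rough sketch of that conditional kwarg selection (not existing torchgeo code; it assumes the pre-1.0 torchmetrics API where JaccardIndex takes `num_classes` directly):

```python
# Sketch of the conditional kwarg selection suggested above; not existing
# torchgeo code. Assumes the pre-1.0 torchmetrics API where JaccardIndex
# takes `num_classes` directly.
import torchmetrics
from packaging.version import parse
from torchmetrics import JaccardIndex

if parse(torchmetrics.__version__) >= parse("0.9.0"):
    kwargs = {"average": "none"}    # v0.9+ name for "no averaging"
else:
    kwargs = {"reduction": "none"}  # pre-v0.9 name

metric = JaccardIndex(num_classes=2, **kwargs)
```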
@adamjstewart We have to manually write metrics because we have our own custom loop over the dataset instead of using "metrics = trainer.test(module, datamodule)", which would allow us to use the CSVLogger callback. We should refactor to use PyTorch Lightning's trainer at some point.
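A hedged sketch of the kind of refactor meant here; `module` and `datamodule` are placeholders for a task and datamodule built elsewhere, and the logger settings are illustrative:

```python
# Sketch only: `module` and `datamodule` stand in for a LightningModule and
# LightningDataModule constructed elsewhere; directory/name values are made up.
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import CSVLogger


def evaluate(module, datamodule):
    """Run the standard test loop and log metrics to CSV instead of a manual loop."""
    trainer = Trainer(logger=CSVLogger(save_dir="output", name="evaluation"))
    metrics = trainer.test(model=module, datamodule=datamodule)
    return metrics
```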
ETCI2021 is binary IIRC, so it shouldn't be handled the same as the multiclass cases (we want the Jaccard of the positive class, not the average Jaccard of the negative class and positive class).
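To illustrate the distinction with a standalone snippet (v0.9-style torchmetrics API assumed; not torchgeo code):

```python
# For a binary task we want the IoU of the positive class, not the macro
# average over the negative and positive classes.
import torch
from torchmetrics import JaccardIndex

preds = torch.tensor([0, 0, 1, 1])
target = torch.tensor([0, 1, 1, 1])

per_class = JaccardIndex(num_classes=2, average="none")(preds, target)
positive_iou = per_class[1]  # Jaccard of the positive class only
macro_iou = JaccardIndex(num_classes=2, average="macro")(preds, target)
print(positive_iou, macro_iou)
```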
Ah, so this relates to #245. How hard would it be to make a subclass of SemanticSegmentationTrainer that supports the binary case?
A first draft that I've been using lately. It works decently, though it's sloppy.
Probably not hard, I can likely do this soon
Description
In torchmetrics v0.9, the argument name for JaccardIndex has changed: the "reduction" argument has been replaced by "average". This causes undesired behaviour when calling the JaccardIndex metric with the pre-v0.9 signature, as in evaluate.py.
I'd suggest bumping the version requirement in setup.cfg and environment.yml to >0.9.
Steps to reproduce
Since there are no tests for evaluate.py, nothing currently fails because of this issue. However, this short script shows the impact of the out-of-date signature for JaccardIndex as used in evaluate.py.
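The script itself is not reproduced above; a minimal sketch along the lines described, assuming torchmetrics >= 0.9 (where, as reported here, the stale `reduction` kwarg is not mapped to `average`, so the default is used instead):

```python
# Minimal reproduction sketch (not the original script).
import torch
from torchmetrics import JaccardIndex

preds = torch.tensor([0, 0, 1, 1])
target = torch.tensor([0, 1, 1, 1])

# Pre-v0.9 style call, as in evaluate.py: `reduction` is no longer the metric's
# argument, so (as reported in this issue) the default average="macro" is used
# and a single averaged score comes back instead of per-class scores.
old_style = JaccardIndex(num_classes=2, reduction="none")
print(old_style(preds, target))

# v0.9 style call: per-class Jaccard scores, no averaging.
new_style = JaccardIndex(num_classes=2, average="none")
print(new_style(preds, target))
```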
As can be seen in the result below, the default value for "average" (`average: Optional[str] = "macro"`) is used when the previous "reduction" argument is given, therefore computing a "macro" average when, in fact, no averaging is desired.
Result:
Version
0.4.0.dev0