
JaccardIndex: in torchmetrics v0.9, "average" argument replaces "reduction" #805

Closed
remtav opened this issue Oct 3, 2022 · 8 comments · Fixed by #816
Labels
scripts Training and evaluation scripts

Comments

@remtav
Contributor

remtav commented Oct 3, 2022

Description

In torchmetrics v0.9, the argument name for JaccardIndex has changed: the "reduction" argument was renamed to "average". This causes undesired behaviour when calling the JaccardIndex metric with the pre-v0.9 signature, as in evaluate.py.

I'd suggest bumping the version requirement in setup.cfg and environment.yml to >0.9.
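For illustration, the pins could look roughly like this (reading ">0.9" as "0.9 or newer"; the exact surrounding entries in torchgeo's setup.cfg and environment.yml will differ):

# setup.cfg, under install_requires
torchmetrics>=0.9

# environment.yml, under dependencies
- torchmetrics>=0.9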

Steps to reproduce

Since there are no tests on evaluate.py, nothing currently fails because of this issue. However, this short script shows the impact of the out-of-date JaccardIndex signature as used in evaluate.py.

import torch
from torchmetrics import MetricCollection, Accuracy, JaccardIndex

y_pred = torch.tensor([0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1])
y = torch.tensor([1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1])

# pre-v0.9 signature: with torchmetrics >=0.9, "reduction" no longer controls
# averaging and the default average="macro" is applied instead
metrics_previous = MetricCollection(
    [Accuracy(num_classes=2), JaccardIndex(num_classes=2, reduction="none")]
)

# v0.9+ signature: "average" gives the intended per-class (no averaging) behaviour
metrics_current = MetricCollection(
    [Accuracy(num_classes=2), JaccardIndex(num_classes=2, average="none")]
)

metrics_previous(y_pred, y)
metrics_current(y_pred, y)

results_prev = metrics_previous.compute()
results_curr = metrics_current.compute()

print("<0.9: ", results_prev)
print(">=0.9: ", results_curr)

As can be seen in the result below, the default value for "average" (average: Optional[str] = "macro") is used when the previous "reduction" argument is given, thereby computing a "macro" average when, in fact, no averaging is desired.

Result:

<0.9:  {'Accuracy': tensor(0.5625), 'JaccardIndex': tensor(0.3611)}
>=0.9:  {'Accuracy': tensor(0.5625), 'JaccardIndex': tensor([0.2222, 0.5000])}

Version

0.4.0.dev0

@adamjstewart
Collaborator

@calebrob6 do you remember why we have custom code in evaluate.py specific to ETCI2021? Also, why do we need to manually write metrics to CSV files? Can't we just use CSVLogger à la #705?

@adamjstewart
Collaborator

If we do keep the code as is, we can check the version of torchmetrics being used and conditionally change the kwarg name.
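A minimal sketch of the conditional-kwarg idea (my own illustration, not existing code in evaluate.py), picking the keyword based on the installed torchmetrics version:

from packaging.version import parse

import torchmetrics
from torchmetrics import JaccardIndex

# torchmetrics >=0.9 renamed "reduction" to "average"
if parse(torchmetrics.__version__) >= parse("0.9"):
    jaccard = JaccardIndex(num_classes=2, average="none")
else:
    jaccard = JaccardIndex(num_classes=2, reduction="none")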

@adamjstewart adamjstewart added the scripts Training and evaluation scripts label Oct 3, 2022
@isaaccorley
Collaborator

@adamjstewart We have to manually write metrics because we have our own custom loop over the dataset instead of using "metrics = trainer.test(module, datamodule)", which would allow us to use the CSVLogger callback. We should refactor to use PyTorch Lightning's Trainer at some point.
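A rough sketch of that refactor (assumed wiring, not code that exists in the repo): hand the loop to Lightning's Trainer and attach a CSVLogger so metrics are written to a metrics.csv automatically.

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import CSVLogger

# `task` and `datamodule` stand in for whatever evaluate.py builds today
logger = CSVLogger(save_dir="output", name="evaluation")
trainer = Trainer(logger=logger, accelerator="auto", devices=1)

# metrics are returned here and also written to output/evaluation/version_*/metrics.csv
metrics = trainer.test(model=task, datamodule=datamodule)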

@calebrob6
Member

ETCI2021 is binary IIRC, so it shouldn't be handled the same as the multiclass cases (we want the Jaccard of the positive class, not the average Jaccard of the negative and positive classes).
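For concreteness, a hedged snippet of what "Jaccard of the positive class" could look like with the >=0.9 signature from the issue above: compute per-class scores and keep only index 1.

import torch
from torchmetrics import JaccardIndex

preds = torch.tensor([0, 1, 1, 0, 1])
target = torch.tensor([0, 1, 0, 0, 1])

per_class = JaccardIndex(num_classes=2, average="none")(preds, target)  # tensor of shape [2]
positive_iou = per_class[1]  # IoU of the positive class only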

@adamjstewart
Collaborator

Ah, so this relates to #245. How hard would it be to make a subclass of SemanticSegmentationTrainer that supports the binary case?

@remtav
Contributor Author

remtav commented Oct 4, 2022

Ah, so this relates to #245. How hard would it be to make a subclass of SemanticSegmentationTrainer that supports the binary case?

Here's a first draft that I've been using lately. It works decently, though it's sloppy:
https://github.com/remtav/torchgeo/blob/ccmeo-dataset/torchgeo/trainers/segmentation.py#L215
Also, this torchmetrics bug seems to be fixed in the latest version.

@calebrob6
Member

calebrob6 commented Oct 11, 2022 via email

@calebrob6
Member

calebrob6 commented Oct 11, 2022 via email
