Some metrics are handling absent values incorrectly #1017

Closed
faber6911 opened this issue May 8, 2022 · 2 comments · Fixed by #1195
Labels: bug / fix (Something isn't working), help wanted (Extra attention is needed)

Comments


faber6911 commented May 8, 2022

🐛 Bug

Some metrics, such as Accuracy, Precision, Recall, and F1Score, handle absent values incorrectly.
A class that is absent from both target and preds, and is therefore correctly predicted, is scored as incorrect.

To Reproduce

Steps to reproduce the behavior...

  • Initialize the metric with num_classes=2, mdmc_average="samplewise", and average="none";
  • create a batch of two samples that are predicted perfectly, but where one of the classes is absent from one of the samples.

Code sample

import torch
from torchmetrics import Accuracy

# class 1 is absent from the first sample of the batch
target = torch.tensor(
    [
        [0,0,0,0],
        [0,0,1,1],
    ]
)
preds = torch.tensor(
    [
        [0,0,0,0],
        [0,0,1,1],
    ]
)

acc = Accuracy(num_classes=2, average="none", mdmc_average="samplewise")
actual_result = acc(preds, target)

expected_result = torch.tensor([1., 1.])
assert torch.equal(expected_result, actual_result)

Expected behavior

The result should be torch.tensor([1., 1.]), because both classes are predicted correctly for both samples of the batch.
In the first sample, the absence of class 1 matches the target tensor, so it should count as correct.
Despite this, the metric returns torch.tensor([1., .5]), because the per-class score for class 1 in the first sample is set to 0.0.
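
For illustration, here is a rough reconstruction of how the pre-refactor computation arrives at torch.tensor([1., .5]). This is a hand-rolled sketch of the effective per-sample, per-class scoring, not the library's actual code path:

import torch

target = torch.tensor([[0, 0, 0, 0], [0, 0, 1, 1]])
preds = torch.tensor([[0, 0, 0, 0], [0, 0, 1, 1]])

per_sample = []
for p, t in zip(preds, target):
    scores = []
    for cls in (0, 1):
        support = (t == cls).sum()
        correct = ((p == cls) & (t == cls)).sum()
        # a class absent from the sample (support == 0) is scored 0.0
        # instead of being skipped; this is the bug
        scores.append(correct / support if support > 0 else torch.tensor(0.0))
    per_sample.append(torch.stack(scores))

print(torch.stack(per_sample).mean(dim=0))  # tensor([1.0000, 0.5000])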

Environment

  • OS (e.g., Linux): Linux
  • Python & PyTorch Version (e.g., 1.0): 3.8.12 & 1.11
  • How you installed PyTorch (conda, pip, build command if you used source): pip
  • Any other relevant information:

Additional context

The JaccardIndex metric provides an absent_score argument to handle such cases.
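
For comparison, a minimal sketch of how that argument is used (assuming the 0.8-era JaccardIndex signature, where reduction="none" returns per-class scores):

import torch
from torchmetrics import JaccardIndex

target = torch.tensor([0, 0, 0, 0])
preds = torch.tensor([0, 0, 0, 0])

# absent_score is the value assigned to a class that appears in neither
# preds nor target; 1.0 treats an absent class as perfectly predicted
jaccard = JaccardIndex(num_classes=2, absent_score=1.0, reduction="none")
print(jaccard(preds, target))  # tensor([1., 1.])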

@faber6911 faber6911 added the bug / fix and help wanted labels May 8, 2022
@faber6911 faber6911 changed the title Some Metrics are handling absent values incorrectly Some metrics are handling absent values incorrectly May 8, 2022

github-actions bot commented May 8, 2022

Hi! Thanks for your contribution, great first issue!

@SkafteNicki SkafteNicki modified the milestones: v0.9, v0.10 May 12, 2022
@SkafteNicki
Member

This issue will be fixed by the classification refactor: see issue #1001 and PR #1195 for all changes.

Small recap: this issue describes the accuracy metric not computing the right value in the binary setting. The problem with the current implementation is that the metric is calculated as an average over the 0 and 1 classes, which is wrong.

After the refactor this has been fixed. Using the new binary_* version of the metric on the provided example:

from torchmetrics.functional import binary_accuracy
import torch
target = torch.tensor(
    [
        [0,0,0,0],
        [0,0,1,1],
    ]
)
preds = torch.tensor(
    [
        [0,0,0,0],
        [0,0,1,1],
    ]
)
binary_accuracy(preds, target, multidim_average="samplewise")  # tensor([1., 1.])

which gives the correct result.
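
The module-based counterpart should behave the same way; a quick sketch, assuming the post-refactor class API in torchmetrics.classification:

import torch
from torchmetrics.classification import BinaryAccuracy

target = torch.tensor([[0, 0, 0, 0], [0, 0, 1, 1]])
preds = torch.tensor([[0, 0, 0, 0], [0, 0, 1, 1]])

metric = BinaryAccuracy(multidim_average="samplewise")
print(metric(preds, target))  # tensor([1., 1.])
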
The issue will be closed when #1195 is merged.
