Accuracy metric computation doesn't ignore "ignore_index" #1209

@jiehuang165

Description

Describe the bug
Shouldn't positions marked with "ignore_index" be excluded when computing accuracy?

Model training calculates "acc_seg" when computing the loss:

loss['acc_seg'] = accuracy(seg_logit, seg_label)

This calls the accuracy function
def accuracy(pred, target, topk=1, thresh=None):

and the accuracy metric is computed as
res.append(correct_k.mul_(100.0 / target.numel()))

However, the denominator target.numel() includes positions marked with ignore_index. Accuracy should be (number of correct predictions) / (number of valid samples, i.e. excluding ignored ones): ignored samples should appear in neither the numerator nor the denominator. The current implementation therefore reports a lower accuracy than expected. I think the correct code would be something like:

loss['acc_seg'] = accuracy(seg_logit, seg_label, ignore_index=self.ignore_index)

def accuracy(pred, target, topk=1, thresh=None, ignore_index=-100):
    ...
    res.append(correct_k.mul_(100.0 / target[target != ignore_index].numel()))
    ...
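To make the suggestion above concrete, here is a minimal self-contained sketch of a top-1 accuracy that masks ignored positions out of both the numerator and the denominator. This is an illustration of the proposed fix, not the upstream implementation; the argument names follow the snippet above, and the simple top-1 logic is an assumption for brevity.

```python
import torch


def accuracy(pred, target, ignore_index=-100):
    """Top-1 accuracy (in percent) that excludes ignore_index positions
    from both numerator and denominator. pred: (N, C, ...) logits,
    target: (N, ...) class indices."""
    pred_label = pred.argmax(dim=1)          # (N, ...) predicted classes
    valid = target != ignore_index           # mask of non-ignored positions
    correct = (pred_label == target) & valid # ignored positions never count as correct
    num_valid = valid.sum().item()
    if num_valid == 0:
        return torch.tensor(0.0)
    # denominator is the number of valid positions, not target.numel()
    return correct.sum().float().mul_(100.0 / num_valid)


# Example: 4 samples, 2 classes; the last sample is ignored.
pred = torch.tensor([[2.0, 0.0], [0.0, 2.0], [2.0, 0.0], [0.0, 2.0]])
target = torch.tensor([0, 1, 1, -100])
# Predictions are [0, 1, 0, 1]: 2 correct out of 3 valid -> 66.67%,
# whereas dividing by target.numel() (4) would wrongly give 50%.
acc = accuracy(pred, target)
```

With the original denominator the ignored sample would drag the reported accuracy down even though the model was never asked to predict it.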
