Describe the bug
Shouldn't `ignore_index` be ignored when computing accuracy?
During model training, `acc_seg` is calculated when computing the loss:

```python
loss['acc_seg'] = accuracy(seg_logit, seg_label)
```
which calls the `accuracy` function:

```python
def accuracy(pred, target, topk=1, thresh=None):
```
and the accuracy metric is calculated as:

```python
res.append(correct_k.mul_(100.0 / target.numel()))
```
However, the denominator `target.numel()` includes pixels marked with `ignore_index`. I mean the accuracy should be (number of correct pixels) / (total number of pixels, excluding ignored ones); ignored samples should appear in neither the numerator nor the denominator, right? The current implementation makes the reported accuracy lower than expected. I think the correct code would be something like:
```python
loss['acc_seg'] = accuracy(seg_logit, seg_label, ignore_index=self.ignore_index)
```

```python
def accuracy(pred, target, topk=1, thresh=None, ignore_index=-100):
    ...
    res.append(correct_k.mul_(100.0 / target[target != ignore_index].numel()))
    ...
```
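For illustration, here is a minimal, self-contained sketch of the difference (the tensors and the `ignore_index = 255` value are hypothetical, not taken from mmsegmentation):

```python
import torch

# Hypothetical example: one of four pixels carries the ignore label.
ignore_index = 255
pred = torch.tensor([0, 1, 1, 2])               # predicted class per pixel
target = torch.tensor([0, 1, 2, ignore_index])  # ground truth per pixel

# The ignored pixel can never match a valid class prediction,
# so it is already absent from the numerator.
correct = (pred == target).sum().float()        # 2 correct pixels

acc_current = 100.0 * correct / target.numel()                           # 2/4 = 50.0
acc_proposed = 100.0 * correct / target[target != ignore_index].numel()  # 2/3 ≈ 66.7

print(acc_current.item(), acc_proposed.item())
```

With the current denominator, the ignored pixel is effectively counted as a wrong prediction, which is why the reported `acc_seg` comes out lower than expected.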