AverageMeter and Metric Reduction Clarification #1787

25benjaminli opened this issue Aug 16, 2024 · 1 comment
25benjaminli commented Aug 16, 2024

I am seeking clarification on the AverageMeter and metric reduction implementations in 3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb.

For instance, here is the AverageMeter implementation provided in the BraTS SwinUNETR tutorial:

import numpy as np


class AverageMeter(object):
    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        # val: latest metric value(s); n: weight(s), e.g. per-class not-nan counts
        self.val = val
        self.sum += val * n
        self.count += n
        # Element-wise: average where count > 0, otherwise fall back to the raw sum
        self.avg = np.where(self.count > 0, self.sum / self.count, self.sum)

It's similar to the ImageNet implementation apart from the last line.

ImageNet's last line: self.avg = self.sum / self.count
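
To illustrate the difference (a minimal sketch with made-up per-class values): with array-valued counts, plain division produces NaN for any class whose count is still zero, while the np.where variant falls back to the raw sum.

import numpy as np

meter = AverageMeter()
# Hypothetical per-class dice values and not-nan counts
meter.update(np.array([0.90, 0.80]), n=np.array([1, 0]))

# Plain sum / count would give [0.9, nan] (0/0 for class 1);
# np.where falls back to the raw sum, so class 1 reads 0.0 instead
# (numpy may still emit a RuntimeWarning for the 0/0 division)
print(meter.avg)  # -> [0.9, 0.0]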

I think this could be problematic because the tutorial does the following:

acc_func(y_pred=val_output_convert, y=val_labels_list)
acc, not_nans = acc_func.aggregate()
run_acc.update(acc.cpu().numpy(), n=not_nans.cpu().numpy())

This implies that any time the model produces a NaN prediction while the ground truth is NOT NaN (or vice versa), the resulting Dice coefficient is NOT penalized as though the prediction had a Dice coefficient of zero.

For instance, let's assume the ground truth has foreground pixels in it, but the model predicts nothing for one class, resulting in these dice values: [0.0000, 0.8775, 0.1213]

The way AverageMeter is currently used might simply ignore the 0 dice value for the first class if DiceMetric returned NaN for the corresponding class.
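
A minimal sketch of that concern (the values are the hypothetical ones above):

import numpy as np

meter = AverageMeter()
# Epoch 1: suppose DiceMetric returned NaN for class 0, so aggregate()
# reports 0 for it with a not-nan count of 0
meter.update(np.array([0.0, 0.8775, 0.1213]), n=np.array([0, 1, 1]))
# Epoch 2: all three classes are valid
meter.update(np.array([0.9, 0.9, 0.9]), n=np.array([1, 1, 1]))

# Class 0 averages to 0.9: the zero from epoch 1 carried no weight
print(meter.avg)  # -> [0.9, 0.88875, 0.51065]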

However, those NaNs are caused by the ground truth being NaN. What happens when the ground truth is NaN AND the prediction is NOT NaN (i.e. a false positive)? That case is also undesirable.

Edit: please see my next comment for further clarification.

To Reproduce
Just run the notebook or use the AverageMeter implementation within your own code.

Expected behavior
Would it be possible to refactor the aggregation/average meter to yield zero when the ground truth is NaN and the prediction is NOT NaN?

25benjaminli commented Aug 16, 2024

Perhaps the intended purpose is to avoid giving weight to ground truths that contain NaNs? However, this implementation still doesn't make sense when viewed from that perspective.

In other words, as I see it, there are four NaN possibilities for the ground truth (GT) and the prediction (PRD):

  1. GT: Not NaN, PRD: Not NaN
  2. GT: NaN, PRD: Not NaN
  3. GT: Not NaN, PRD: NaN
  4. GT: NaN, PRD: NaN

If a parameter like ignore_empty is set to true, scenarios 2 & 4 will be set to NaN. Scenario 4 makes sense because we want to ignore any dice calculation where both the ground truth and the prediction are NaN. However, what about scenario 2 (a false positive), which is also undesirable?
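
As a minimal sketch of scenario 2 with monai.metrics.DiceMetric (the shapes and values here are made up for illustration):

import torch
from monai.metrics import DiceMetric

# One-hot tensors for a single 2-class sample: class 1 is absent from
# the ground truth, but the model predicts one false-positive pixel
y = torch.zeros(1, 2, 4, 4)
y[0, 0] = 1.0                   # class 0 fills the ground truth
y_pred = torch.zeros(1, 2, 4, 4)
y_pred[0, 0] = 1.0
y_pred[0, 0, 0, 0] = 0.0
y_pred[0, 1, 0, 0] = 1.0        # scenario 2: GT empty, PRD not

metric = DiceMetric(include_background=True, reduction="mean_batch",
                    get_not_nans=True, ignore_empty=True)
metric(y_pred=y_pred, y=y)
acc, not_nans = metric.aggregate()
print(acc, not_nans)  # class 1 comes back with not_nans == 0, so
                      # AverageMeter.update gives the false positive zero weight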

25benjaminli changed the title from "AverageMeter not used properly in swin unetr brain tumor tutorial?" to "AverageMeter and Metric Reduction Clarification" on Aug 16, 2024