PearsonCorrcoef Causing OOM #373
Divide-By-0 added the bug / fix and help wanted labels on Jul 14, 2021
Hi! Thanks for your contribution, great first issue!
Borda changed the title from "Fix for PearsonCorrcoef Causing OOM" to "PearsonCorrcoef Causing OOM" on Jul 14, 2021
🐛 Bug
When using torchmetrics.PearsonCorrcoef as a validation metric, I get:

UserWarning: Metric `PearsonCorrcoef` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.

To Reproduce
Steps to reproduce the behavior:
Code sample
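A minimal sketch of the setup that triggers the warning; the batch size and loop count here are made up for illustration, but the warning arises because every `update()` call appends the full preds/target tensors to the metric's internal state:

```python
import torch
import torchmetrics

# Instantiating and repeatedly updating PearsonCorrcoef grows its
# internal buffers, since it stores all preds/targets until compute().
metric = torchmetrics.PearsonCorrcoef()

for _ in range(1000):  # e.g. many validation batches
    preds = torch.randn(512)
    target = torch.randn(512)
    metric.update(preds, target)  # buffers grow with every call

print(metric.compute())
```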
Expected behavior
An iterative, batch-by-batch algorithm is implemented for the Pearson r calculation, so that predictions and targets do not need to be buffered for the whole run.
For example, this Stack Overflow answer suggests one (unverified) streaming algorithm, https://stackoverflow.com/a/65132700, and one would expect that Keras and other PyTorch alternatives may already use an internal accumulative algorithm.
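For illustration, here is a minimal sketch of such an accumulative approach using Chan-style moment merging, which keeps only six scalars of state; the class name and structure are hypothetical, not torchmetrics' actual implementation, and the numerics are unverified for extreme inputs:

```python
import math
import torch

class StreamingPearson:
    """Sketch of a constant-memory Pearson correlation: each batch is
    reduced to its count, means, and (co-)moments, which are merged
    into running totals instead of buffering the raw tensors."""

    def __init__(self):
        self.n = 0
        self.mean_x = 0.0
        self.mean_y = 0.0
        self.m2_x = 0.0  # running sum of squared deviations of x
        self.m2_y = 0.0  # running sum of squared deviations of y
        self.c_xy = 0.0  # running co-moment of x and y

    def update(self, x: torch.Tensor, y: torch.Tensor) -> None:
        x = x.detach().flatten().double()
        y = y.detach().flatten().double()
        n_b = x.numel()
        mean_bx, mean_by = x.mean().item(), y.mean().item()
        m2_bx = ((x - mean_bx) ** 2).sum().item()
        m2_by = ((y - mean_by) ** 2).sum().item()
        c_b = ((x - mean_bx) * (y - mean_by)).sum().item()

        # Merge the batch statistics into the running statistics
        # (Chan et al. parallel-update formulas).
        n_new = self.n + n_b
        dx = mean_bx - self.mean_x
        dy = mean_by - self.mean_y
        factor = self.n * n_b / n_new
        self.m2_x += m2_bx + dx * dx * factor
        self.m2_y += m2_by + dy * dy * factor
        self.c_xy += c_b + dx * dy * factor
        self.mean_x += dx * n_b / n_new
        self.mean_y += dy * n_b / n_new
        self.n = n_new

    def compute(self) -> float:
        # r = cov(x, y) / (std(x) * std(y)); the shared 1/(n-1)
        # factors cancel, so the raw moments suffice.
        return self.c_xy / math.sqrt(self.m2_x * self.m2_y)
```

With this scheme, memory use is constant in the number of batches, since only the scalar state is carried between `update()` calls.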
Environment

How you installed (conda, pip, source): conda

Additional context