Bug Description
The documentation does not explain that the MONAI inferers accumulate backpropagation information with each additional batch. The name "inferer" inherently implies that no gradient information should be stored.
It took a few hours of detailed debugging to find the root cause, which makes for a difficult user experience.
To Reproduce
Write a simple inference program using ScanningWindowInferer where the inference is performed on GPU; gradient information accumulates with each batch. On CPU, the same accumulation drastically slows down inference.
Expected behavior
The documentation should state that calls to the inferers should be wrapped in torch.no_grad(). Alternatively, the current inferer classes could be renamed to ScanningWindowForwardPropagator and wrapped by a ScanningWindowInferer that internally uses torch.no_grad().
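A minimal sketch of the workaround, using a plain torch module as a stand-in for a MONAI inferer call (the network, input shape, and variable names here are placeholders, not MONAI API):

```python
import torch

# Placeholder network standing in for the model passed to an inferer.
net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
x = torch.randn(1, 1, 8, 8)

# Without no_grad(): autograd records the computation graph for each
# forward pass, so memory (and CPU time) grows with every batch.
y_tracked = net(x)

# Workaround: wrap the inference call in torch.no_grad() so no gradient
# information is stored.
with torch.no_grad():
    y = net(x)

print(y_tracked.requires_grad)  # True  - graph is being accumulated
print(y.requires_grad)          # False - no gradient info retained
```

The same pattern applies to an inferer call: wrapping it in `with torch.no_grad():` prevents the per-batch graph accumulation described above.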