🎩 ⛷️ Switch to using inference_mode instead of torch no_grad #604
Description of the Change
Replaced the use of torch.no_grad with the newer torch.inference_mode call. More details on inference mode can be found here: https://pytorch.org/docs/stable/generated/torch.inference_mode.html
Essentially, it is a stricter version of no_grad that can give a free speedup to inference tasks. The PyTorch team will hopefully keep optimizing this mode in future releases, so it would be good to switch to it now.
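For illustration, a minimal sketch of the substitution; the model and batch here are hypothetical stand-ins, not pykeen code:

```python
import torch

model = torch.nn.Linear(8, 4)  # stand-in for any trained model
batch = torch.randn(2, 8)
model.eval()

# Before: gradients are disabled, but autograd bookkeeping (e.g. view and
# version-counter tracking) still happens on the produced tensors.
with torch.no_grad():
    scores = model(batch)

# After: inference_mode additionally skips that bookkeeping, which can make
# inference slightly faster and lighter on memory.
with torch.inference_mode():
    scores = model(batch)
```

It can also be used as a decorator, `@torch.inference_mode()`, in the same places `@torch.no_grad()` is used today.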
Possible Drawbacks
I believe this was only introduced in PyTorch 1.9, so it would restrict which versions of PyTorch can be used with pykeen. It's also possible that some of the limitations of inference mode may have unintended consequences down the line, for example: "tensors created in inference mode will not be able to be used in computations to be recorded by autograd after exiting inference mode." See the sketch below for what that looks like in practice.
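A small sketch of that limitation, assuming current PyTorch behavior (the exact error message and the point at which it is raised may vary by version):

```python
import torch

with torch.inference_mode():
    x = torch.ones(3)  # created inside inference mode -> an "inference tensor"

w = torch.ones(3, requires_grad=True)

try:
    # Autograd must save x for the backward of the multiply, and saving an
    # inference tensor is disallowed, so this raises a RuntimeError.
    loss = (w * x).sum()
    loss.backward()
except RuntimeError as err:
    print(f"RuntimeError: {err}")

# Workaround suggested by the docs: clone the inference tensor to get a
# normal tensor before using it in autograd-recorded computation.
loss = (w * x.clone()).sum()
loss.backward()
```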
Verification Process
Ran a few training and evaluation loops with existing code I had. Predictive performance remained the same, though I also didn't see a large inference speedup on a 1080Ti.