
🎩 ⛷️ Switch to using inference_mode instead of torch no_grad #604

Merged
merged 5 commits into from
Nov 8, 2021

Conversation

sbonner0
Contributor

Description of the Change

Replaced the use of torch.no_grad with the newer torch.inference_mode call. More details on inference mode can be found here: https://pytorch.org/docs/stable/generated/torch.inference_mode.html

Essentially it is a stricter version of no_grad and can give a free speedup on inference tasks. Hopefully future optimizations for this mode will come from the PyTorch team, so it would be good to switch to it.
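As a minimal sketch (not PyKEEN's actual code), inference_mode is used exactly like no_grad, as a context manager or decorator; the model and shapes below are placeholders:

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for any trained model

# Swapping torch.no_grad() for torch.inference_mode() at the call site.
with torch.inference_mode():
    out = model(torch.randn(3, 4))

# Tensors produced under inference_mode skip gradient tracking and
# version-counter bookkeeping, which is where the extra speedup comes from.
assert not out.requires_grad
assert out.is_inference()
```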

Possible Drawbacks

I think this was only introduced in PyTorch 1.9, so it would limit which versions of PyTorch can be used with PyKEEN. It's also possible that some of the limitations of inference mode -- for example, "tensors created in inference mode will not be able to be used in computations to be recorded by autograd after exiting inference mode" -- may have some unintended consequences down the line.
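A hypothetical illustration of that quoted limitation: a tensor created inside inference_mode cannot later participate in a computation that autograd records.

```python
import torch

# Create a tensor while inference mode is active.
with torch.inference_mode():
    frozen = torch.ones(3)

w = torch.ones(3, requires_grad=True)

# Outside inference mode, mixing `frozen` into a graph that autograd
# needs to record raises a RuntimeError.
try:
    (frozen * w).sum().backward()
except RuntimeError as e:
    print("autograd rejected the inference tensor:", e)
```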

Verification Process

Ran a few training and evaluation loops with existing code I had. Predictive performance results remained the same; I also didn't see a massive speedup for inference on a 1080Ti.

@mberr
Member

mberr commented Oct 17, 2021

Sounds good to me. From my point of view, introducing a torch>=1.9.0 requirement is not a problem -- we generally develop code to be compliant with the latest stable release of our dependencies, although, of course, it is nice if it also works with older versions.

@mberr
Member

mberr commented Oct 17, 2021

This requirement is already in the setup.cfg, cf.

torch>=1.9; platform_system != "Windows"

@cthoyt
Member

cthoyt commented Nov 7, 2021

@PyKEEN-bot test

@cthoyt cthoyt changed the title Switch to using inference_mode instead of torch no_grad 🎩 ⛷️ Switch to using inference_mode instead of torch no_grad Nov 8, 2021
@cthoyt
Member

cthoyt commented Nov 8, 2021

@PyKEEN-bot test

@cthoyt cthoyt merged commit cafc802 into pykeen:master Nov 8, 2021
@cthoyt
Member

cthoyt commented Nov 8, 2021

Thank you @sbonner0!

@sbonner0 sbonner0 deleted the feature/inference_mode branch November 10, 2021 10:21
4 participants