
Support for PyTorch's optimize_for_inference mode #8499

Closed
ananthsub opened this issue Jul 21, 2021 · 3 comments · Fixed by #8813
Labels
feature (Is an improvement or enhancement) · help wanted (Open to be worked on)
Milestone: v1.5

Comments

@ananthsub
Contributor

🚀 Feature

Leverage PyTorch's optimize_for_inference mode for performance benefits during model evaluation and inference

PyTorch has recently introduced an experimental API, torch.jit.optimize_for_inference.

Motivation

Reap performance improvements during evaluation and inference.

Pitch

This could be used during Trainer.predict in place of the no_grad context manager, when optimize_for_inference is available: https://github.com/PyTorchLightning/pytorch-lightning/blob/4c79b3a5b343866217784c66d122819c59a92c1d/pytorch_lightning/trainer/trainer.py#L1078-L1083
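As a sketch of what this could look like (assuming PyTorch ≥ 1.9, where torch.jit.optimize_for_inference operates on a scripted or traced module in eval mode; the Net module below is a hypothetical stand-in, not Lightning code):

```python
import torch


class Net(torch.nn.Module):
    """Hypothetical stand-in model for illustration."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)


model = Net().eval()
# optimize_for_inference expects a ScriptModule, so script (or trace) first
scripted = torch.jit.script(model)
optimized = torch.jit.optimize_for_inference(scripted)

with torch.no_grad():
    out = optimized(torch.randn(1, 4))
print(out.shape)  # torch.Size([1, 2])
```

The optimized module freezes parameters and applies inference-only graph rewrites, so it is no longer suitable for training; it would have to be rebuilt from the original module before the next fit call.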

Alternatives

Keep as is

Additional context

If you enjoy PL, check out our other projects:

  • Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
  • Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning.
  • Bolts: Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch.
  • Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
ananthsub added the feature and help wanted labels on Jul 21, 2021
ananthsub added this to the v1.5 milestone on Jul 21, 2021
@carmocca
Contributor

We could also enable it for test, right?

@ananthsub
Contributor Author

> We could also enable it for test, right?

Yup! I wonder if we could enable it for validation too, as long as we're careful to reset the mode before the next train loop call.

@carmocca
Contributor

carmocca commented Jul 21, 2021

> I wonder if we could enable it for validation too, as long as we're careful to reset the mode before the next train loop call.

Not sure about this one, I can imagine somebody saving a validation inference-only tensor and trying to use it on a computation that has a training tensor. This would fail, right?
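The failure mode being described can be demonstrated with torch.inference_mode (a minimal sketch, assuming PyTorch ≥ 1.9): tensors created under inference mode carry no autograd metadata, so using one in a computation with a tensor that requires grad raises a RuntimeError.

```python
import torch

# A tensor created under inference mode has no autograd metadata
with torch.inference_mode():
    inference_out = torch.ones(3)

train_param = torch.ones(3, requires_grad=True)

try:
    # Autograd would need to save inference_out for the backward pass,
    # which is disallowed for inference tensors, so this raises
    loss = (inference_out * train_param).sum()
    loss.backward()
except RuntimeError as err:
    print(f"failed as expected: {err}")
```

So mixing a saved validation-time tensor into a training-time computation would indeed fail, as the comment suspects.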
