I wonder if we could do this during validation too, as long as we're careful to reset the mode before the next training loop call.
Not sure about this one. I can imagine somebody saving an inference-only tensor from validation and trying to use it in a computation that involves a training tensor. That would fail, right?
🚀 Feature
Leverage PyTorch's optimize_for_inference mode for performance benefits during model evaluation and inference
PyTorch has recently introduced an experimental API: optimize_for_inference

Motivation
Reap performance improvements
Pitch
This can be used during Trainer.predict in place of the no_grad context if optimize_for_inference is available: https://github.com/PyTorchLightning/pytorch-lightning/blob/4c79b3a5b343866217784c66d122819c59a92c1d/pytorch_lightning/trainer/trainer.py#L1078-L1083

Alternatives
Keep as is
Additional context
If you enjoy PL, check out our other projects:

Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA deep learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.