Lazy initialize properties in Strategies #11097
Comments
Is this similar to #7650?
I didn't want to suggest closing this, just trying to understand what your proposal is. We are already doing lazy initialization to some degree, right? Is there anything you would change with the current properties? That's what I'm trying to understand.
@awaelchli I thought the initial version of #11071 had somewhat removed the lazy initialization, so I created this issue to make sure we are on the same page and will keep the subtle logic. Carlos's PR ended up keeping the lazy initialization and fixed checkpoint_io to be lazily initialized as well, so I think this proposal has already been achieved and we can close this issue now. As for a long-term solution, I think your #7650 is better than mine :) And yes, the two issues aim at the same goal. But may I ask why it was marked as won't fix?
Proposed refactor
Strategy has four properties set in __init__: accelerator, precision_plugin, checkpoint_io, and cluster_environment.
Users can pass either a strategy class or a strategy string to the Trainer.
This issue proposes lazy initialization for these strategy properties. @awaelchli and @ananthsub have also brought this up before. A minimal sketch of the pattern follows below.
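For reference, here is a minimal sketch of the lazy-initialization pattern this issue proposes. The class names are simplified stand-ins for the real pytorch_lightning.plugins classes, and the default chosen here is an illustrative assumption, not necessarily Lightning's actual default:

```python
# Minimal sketch of the proposed lazy-initialization pattern.
from typing import Optional


class CheckpointIO:
    """Stand-in for Lightning's CheckpointIO base class."""


class TorchCheckpointIO(CheckpointIO):
    """Stand-in for the default torch.save-based implementation."""


class Strategy:
    def __init__(self, checkpoint_io: Optional[CheckpointIO] = None) -> None:
        # Store exactly what the user passed; None means "not configured",
        # so the connector can tell "unset" apart from "explicitly set".
        self._checkpoint_io = checkpoint_io

    @property
    def checkpoint_io(self) -> CheckpointIO:
        # The default is materialized lazily, on first access.
        if self._checkpoint_io is None:
            self._checkpoint_io = TorchCheckpointIO()
        return self._checkpoint_io

    @checkpoint_io.setter
    def checkpoint_io(self, io: CheckpointIO) -> None:
        self._checkpoint_io = io
```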
Motivation
For correctness, maintainability, and to enable future simplifications.
Pitch
Currently, the TrainingTypePlugin __init__ sets default values for precision_plugin and checkpoint_io (roughly paraphrased in the sketch after the link):
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/plugins/training_type/training_type_plugin.py#L40-L58
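Roughly, the linked __init__ behaves like the following paraphrase of the eager default-setting described above (not the exact source; the stand-in classes keep the sketch self-contained):

```python
# Paraphrase of the eager default-setting pattern, not the exact source.


class TorchCheckpointIO:
    """Stand-in for Lightning's default torch.save-based checkpoint IO."""


class PrecisionPlugin:
    """Stand-in for Lightning's full-precision default plugin."""


class TrainingTypePlugin:
    def __init__(self, checkpoint_io=None, precision_plugin=None):
        # Defaults are resolved eagerly at construction time, so after
        # __init__ downstream code cannot tell "user passed nothing"
        # apart from "user passed the default type".
        self._checkpoint_io = checkpoint_io if checkpoint_io is not None else TorchCheckpointIO()
        self._precision_plugin = precision_plugin if precision_plugin is not None else PrecisionPlugin()
```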
Proposal
In accelerator_connector.py, we can inspect training_type_plugin._precision_plugin before calling the setter: at that point it holds exactly what the user passed in, not the default value. A sketch of this check follows below.
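A hypothetical sketch of the connector-side check (the class fragment, method name, and _resolved_precision_plugin attribute are illustrative, not Lightning's actual API):

```python
class AcceleratorConnector:
    """Illustrative fragment, not Lightning's actual connector."""

    def __init__(self, resolved_precision_plugin):
        # Plugin resolved from Trainer flags (e.g. precision=16);
        # `_resolved_precision_plugin` is an illustrative name.
        self._resolved_precision_plugin = resolved_precision_plugin

    def _configure_precision(self, training_type_plugin) -> None:
        # The strategy stores only what the user passed; None means
        # "unset", so we can inject a default without clobbering an
        # explicit user choice.
        if training_type_plugin._precision_plugin is None:
            training_type_plugin.precision_plugin = self._resolved_precision_plugin
        # Otherwise, respect the plugin the user passed to the strategy.
```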
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @akihironitta @kaushikb11 @ananthsub