Ok, I didn't see this issue before, but it sounds good; feel free to go ahead. To add more context: for historical reasons, the switch had to happen in the Trainer because of the spawning issue, and also because global_rank was not yet defined at Trainer init back then (it was set later). Now that #10896 has landed, the progress bar can encapsulate this behavior completely.
Proposed refactor
Move this logic to individual progress bar callback implementations:
https://github.com/PyTorchLightning/pytorch-lightning/blob/6369e3b77fa3f38613b661517f6361f842f611c9/pytorch_lightning/trainer/trainer.py#L1273-L1275
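As an illustration, the callback could take over the rank check itself in its setup hook. This is only a rough sketch of the direction, not the actual implementation; the class name RankAwareProgressBar and the _enabled attribute are made up for the example, while setup, global_rank, enable and disable are existing hooks/attributes:

```python
from pytorch_lightning.callbacks import ProgressBarBase


class RankAwareProgressBar(ProgressBarBase):
    """Sketch: the bar silences itself on non-zero ranks instead of the Trainer doing it."""

    def __init__(self):
        super().__init__()
        self._enabled = True  # hypothetical internal flag

    def disable(self):
        self._enabled = False

    def enable(self):
        self._enabled = True

    def setup(self, trainer, pl_module, stage=None):
        super().setup(trainer, pl_module, stage)
        # The rank check lives in the callback now, not in the Trainer or spawn plugins.
        if trainer.global_rank != 0:
            self.disable()
```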
Motivation
Simplifies the trainer
Avoids duplication of this logic between the trainer & spawning plugins. For example, this logic is replicated in the TPU Spawn strategy: https://github.com/PyTorchLightning/pytorch-lightning/blob/6369e3b77fa3f38613b661517f6361f842f611c9/pytorch_lightning/plugins/training_type/tpu_spawn.py#L158-L159
Avoids duplication of this logic across different running stages.
We underwent a very similar refactor for loggers to remove rank-zero restrictions here:
#8589
#8608
#7740
Pitch
enable and disable can be internal implementations of the progress bar callback. These flags can be set at any time after the setup hook runs. This way the trainer doesn't have to do any special checks for progress bars in the middle of the training control flow.
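For example, the user-facing side could stay as simple as the following usage sketch, using the hypothetical RankAwareProgressBar from above (a ProgressBarBase subclass passed via callbacks is picked up as the progress bar in place of the default one):

```python
import pytorch_lightning as pl

# Usage sketch: the bar handles its own rank logic in `setup`, so neither the
# Trainer nor the spawn plugins need to touch it.
bar = RankAwareProgressBar()
trainer = pl.Trainer(max_epochs=1, callbacks=[bar])

# Because enable/disable are plain internal flags, they can also be flipped at
# any point after `setup` has run, without special-casing in the control flow.
bar.disable()
bar.enable()
```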
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @Borda @justusschock @awaelchli @akihironitta @SeanNaren @kaushikb11