Avoid configuring SyncBatchNorm when not fitting #9243
Labels: distributed (Generic distributed-related topic), feature (Is an improvement or enhancement), good first issue (Good for newcomers), let's do it! (approved to implement), refactor
Proposed refactoring or deprecation
Conditionally configure SyncBatchNorm in the distributed plugins.
Motivation
This issue is closely related to #6977
Carrying forward discussion from #9096
Related issue in PyTorch: pytorch/pytorch#48988
SyncBatchNorm does not synchronize statistics when the module runs in eval mode. We can therefore configure it conditionally: convert the model when fitting, and skip the conversion when validating, testing, or predicting.
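For context, here is a minimal single-process sketch of the behaviour described above (the toy model and shapes are made up for illustration): after `torch.nn.SyncBatchNorm.convert_sync_batchnorm`, the converted layers only synchronize statistics across processes in train mode, while eval mode just normalizes with the stored running statistics.

```python
import torch
from torch import nn

# Single-process sketch: SyncBatchNorm only synchronizes batch statistics across
# processes while the module is in training mode; in eval mode it normalizes with
# its stored running statistics, so the conversion has no effect outside fitting.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.BatchNorm2d(8), nn.ReLU())
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

model.eval()
if torch.cuda.is_available():  # some PyTorch releases require CUDA inputs for SyncBatchNorm
    model = model.cuda()
    with torch.no_grad():
        # Uses the stored running mean/var; no process-group communication happens here.
        out = model(torch.randn(2, 3, 16, 16, device="cuda"))
```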
Pitch
Conditionally determine whether to configure SyncBatchNorm in the module here: https://github.com/PyTorchLightning/pytorch-lightning/blob/a451997c4da89be3b1e4f7f79b52015bd32f2ea4/pytorch_lightning/plugins/training_type/ddp.py#L384-L387
Essentially, rewrite the existing unconditional check so that the conversion also depends on the trainer's current stage (a rough sketch follows below).
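A rough sketch of what the change could look like (the snippets embedded in the original issue are not reproduced here; the attribute names and the `TrainerFn`/`trainer.state.fn` check are assumptions about the DDP plugin at the linked commit, not a definitive patch):

```python
# (sketch) current behaviour in pytorch_lightning/plugins/training_type/ddp.py:
# SyncBatchNorm is configured whenever the flag is set, regardless of the stage.
if self.sync_batchnorm:
    self.model = self.configure_sync_batchnorm(self.model)
```

```python
# (sketch) proposed behaviour: only convert when the trainer is actually fitting,
# since validate/test/predict run in eval mode where no statistics are synced.
# `TrainerFn` and `trainer.state.fn` are assumed names for the trainer stage API.
from pytorch_lightning.trainer.states import TrainerFn

if self.sync_batchnorm and self.lightning_module.trainer.state.fn == TrainerFn.FITTING:
    self.model = self.configure_sync_batchnorm(self.model)
```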
Additional context