Updated SparseML callback for latest PyTorch Lightning #822

Merged: 7 commits, May 4, 2022
17 changes: 12 additions & 5 deletions pl_bolts/callbacks/sparseml.py
@@ -66,15 +66,22 @@ def _num_training_steps_per_epoch(self, trainer: Trainer) -> int:
         else:
             dataset_size = len(trainer.datamodule.train_dataloader())
 
-        num_devices = max(1, trainer.num_gpus, trainer.num_processes)
-        if trainer.tpu_cores:
-            num_devices = max(num_devices, trainer.tpu_cores)
+        if hasattr(trainer, 'num_devices'):
+            # New behavior in Lightning
+            num_devices = max(1, trainer.num_devices)
+        else:
+            # Old behavior deprecated in v1.6
+            num_devices = max(1, trainer.num_gpus, trainer.num_processes)
+            if trainer.tpu_cores:
+                num_devices = max(num_devices, trainer.tpu_cores)
 
         effective_batch_size = trainer.accumulate_grad_batches * num_devices
         max_estimated_steps = dataset_size // effective_batch_size
 
-        if trainer.max_steps and trainer.max_steps < max_estimated_steps:
-            return trainer.max_steps
+        # To avoid breaking changes, max_steps is set to -1 if it is not defined
+        max_steps = -1 if not trainer.max_steps else trainer.max_steps
+        if max_steps != -1 and max_steps < max_estimated_steps:
+            return max_steps
         return max_estimated_steps
 
     @staticmethod
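For context, the patched logic can be read outside the diff as a standalone helper. The sketch below is not part of the PR; it only mirrors the two changes (the `hasattr(trainer, 'num_devices')` compatibility branch and the `-1` sentinel for an unset `max_steps`). The function name `estimate_steps_per_epoch` and the explicit `dataset_size` argument are illustrative assumptions, not names from the callback.

```python
from pytorch_lightning import Trainer


def estimate_steps_per_epoch(trainer: Trainer, dataset_size: int) -> int:
    """Illustrative sketch of the patched step estimate (not the callback itself)."""
    if hasattr(trainer, "num_devices"):
        # Newer Lightning exposes a unified device count on the Trainer.
        num_devices = max(1, trainer.num_devices)
    else:
        # Older Lightning (attributes deprecated in v1.6): combine GPU, process,
        # and TPU counts to get the number of devices contributing to a step.
        num_devices = max(1, trainer.num_gpus, trainer.num_processes)
        if trainer.tpu_cores:
            num_devices = max(num_devices, trainer.tpu_cores)

    effective_batch_size = trainer.accumulate_grad_batches * num_devices
    max_estimated_steps = dataset_size // effective_batch_size

    # Newer Lightning uses -1 (rather than None or 0) to mean "max_steps not set";
    # normalize to that sentinel so both old and new defaults take the same path.
    max_steps = -1 if not trainer.max_steps else trainer.max_steps
    if max_steps != -1 and max_steps < max_estimated_steps:
        return max_steps
    return max_estimated_steps
```

Because `num_gpus`, `num_processes`, and `tpu_cores` only exist on older Trainer versions, guarding on `hasattr(trainer, 'num_devices')` keeps the same code path correct on both sides of the 1.6 API change without pinning a Lightning version.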