
Apply ProgressBar refresh across the whole epoch, not separately for train and validation. #11342

@tchaton


Proposed refactor

Following the conversation in #11213 (comment), the ProgressBar refresh updates are currently computed as follows:

# train_batches, val_batches, refresh_rate
[5, 3, 6]
# resulting train_updates, val_updates (on these global batch_idx values)
[8], [3]

This means that train_batches + val_batches aren't treated as a single unit; instead, the refresh counter is restarted at the beginning of the validation loop.

Instead, we should have

[8], [1, 3]

The 1 appears because, after the 5 train batches, processing 1 validation batch brings the total number of batches seen to 6, which is divisible by the refresh rate of 6.
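
As a minimal sketch of the two behaviours for the [5, 3, 6] example above (assuming, as the numbers suggest, that a bar also refreshes on the last batch of its loop; should_refresh is a hypothetical helper, not Lightning's actual API):

def should_refresh(batches_seen, refresh_rate, is_last):
    # Refresh when the running batch counter hits a multiple of the
    # refresh rate, or on the last batch of the loop (forced update).
    return batches_seen % refresh_rate == 0 or is_last

train_batches, val_batches, refresh_rate = 5, 3, 6

# Current behaviour: the counter restarts when validation begins.
val_updates_current = [
    i for i in range(1, val_batches + 1)
    if should_refresh(i, refresh_rate, is_last=(i == val_batches))
]

# Proposed behaviour: a single counter spans train + validation.
val_updates_proposed = [
    i for i in range(1, val_batches + 1)
    if should_refresh(train_batches + i, refresh_rate, is_last=(i == val_batches))
]

print(val_updates_current)   # [3]
print(val_updates_proposed)  # [1, 3]: 5 train + 1 val = 6, divisible by 6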

Should we refactor the code to refresh over the total number of batches seen in the epoch 🚀, or leave it as is 😊?



