
Fix inefficiency in rich progress bar #18369

Merged: 5 commits into Lightning-AI:master on Aug 23, 2023
Conversation

@quintenroets (Contributor) commented on Aug 22, 2023

What does this PR do?

Fixes #18366

Before submitting
  • Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

@github-actions bot added the pl label (Generic label for PyTorch Lightning package) on Aug 22, 2023
@Borda (Member) left a comment:

nice, could you pls add some minimal benchmarks for this just to have an idea of how it helps... 🐰

Review thread on src/lightning/pytorch/callbacks/progress/rich_progress.py (outdated, resolved)
@mergify bot added the ready label (PRs ready to be merged) on Aug 22, 2023
@quintenroets (Contributor, Author) commented on Aug 22, 2023

> nice, could you pls add some minimal benchmarks for this just to have an idea of how it helps... 🐰

I created a small benchmark script that invokes the render function 100,000 times (n_experiments = 1e5 in the script below) for each implementation and compares their timings. The difference between the two approaches grows as the number of logged metrics increases; the benchmark sweeps the number of logged metrics from 0 to 99.

The images below show the benchmark results on two different processors:

  • Intel® Celeron® N5095A @ 2.00GHz
  • AMD Ryzen Threadripper 3960X 24-Core Processor

For 90 to 100 logged metrics, there is an average performance gain of approximately 10%.
Furthermore, the new approach aligns better with Pythonic conventions (see the sketch below), which allows the follow-up PR to be implemented more readably.
[Figure: benchmark results on the Intel Celeron N5095A]
[Figure: benchmark results on the AMD Ryzen Threadripper 3960X]
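
For context, the core change being benchmarked is the string-building strategy inside render. A minimal sketch of both variants (the concatenation version reflects my reading of the pre-PR code and is illustrative, not a verbatim copy):

# Before (assumed): build the metrics text by repeated concatenation,
# allocating a new intermediate string on each iteration.
def render_metrics_concat(metrics: dict) -> str:
    text = ""
    for k, v in metrics.items():
        text += f"{k}: {round(v, 3) if isinstance(v, float) else v} "
    return text

# After: generate the fragments lazily and join them once.
def render_metrics_join(metrics: dict) -> str:
    return " ".join(
        f"{k}: {round(v, 3) if isinstance(v, float) else v}"
        for k, v in metrics.items()
    )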

This is the corresponding script:

import timeit
from typing import cast

import matplotlib.pyplot as plt
from rich import get_console, reconfigure
from rich.progress import Task, TaskID
from rich.text import Text

from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import RichProgressBar
from lightning.pytorch.callbacks.progress.rich_progress import (
    CustomProgress,
    MetricsTextColumn,
)
from lightning.pytorch.demos.boring_classes import BoringModel


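# Variant of MetricsTextColumn with the optimized render logic proposed in this PR:
# metric fragments are produced by a generator and joined once.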
class OptimizedTextColumn(MetricsTextColumn):
    def render(self, task: "Task") -> Text:
        assert isinstance(self._trainer.progress_bar_callback, RichProgressBar)
        if (
            self._trainer.state.fn != "fit"
            or self._trainer.sanity_checking
            or self._trainer.progress_bar_callback.train_progress_bar_id != task.id
        ):
            return Text()
        if self._trainer.training and task.id not in self._tasks:
            self._tasks[task.id] = "None"
            if self._renderable_cache:
                self._current_task_id = cast(TaskID, self._current_task_id)
                self._tasks[self._current_task_id] = self._renderable_cache[
                    self._current_task_id
                ][1]
            self._current_task_id = task.id
        if self._trainer.training and task.id != self._current_task_id:
            return self._tasks[task.id]

        text = " ".join(self._generate_metrics_texts())
        return Text(text, justify="left", style=self._style)

    def _generate_metrics_texts(self):
        for k, v in self._metrics.items():
            yield f"{k}: {round(v, 3) if isinstance(v, float) else v}"


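# RichProgressBar subclass that installs the optimized metrics column.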
class OptimizedProgressBar(RichProgressBar):
    def _init_progress(self, trainer: Trainer) -> None:
        if self.is_enabled and (self.progress is None or self._progress_stopped):
            self._reset_progress_bar_ids()
            reconfigure(**self._console_kwargs)
            self._console = get_console()
            self._console.clear_live()
            self._metric_component = OptimizedTextColumn(trainer, self.theme.metrics)
            self.progress = CustomProgress(
                *self.configure_columns(trainer),
                self._metric_component,
                auto_refresh=False,
                disable=self.is_disabled,
                console=self._console,
            )
            self.progress.start()
            # progress has started
            self._progress_stopped = False


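# BoringModel that logs a configurable number of metrics to the progress bar.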
class CustomModel(BoringModel):
    def __init__(self, number_of_metrics: int = 0):
        self.number_of_metrics = number_of_metrics
        super().__init__()

    def training_step(self, *args, **kwargs):
        res = super().training_step(*args, **kwargs)
        loss = res["loss"]
        self.log_loss("train", loss)
        return res

    def log_loss(self, phase, loss):
        for i in range(self.number_of_metrics):
            self.log(f"{phase}_loss_{i}", loss, prog_bar=True)


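# Times render for both implementations across 0-99 logged metrics and plots the results.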
class BenchMarker:
    n_experiments: int = int(1e5)

    def start(self):
        original_timings = []
        optimized_timings = []
        number_of_metrics_list = []

        for number_of_metrics in range(100):
            original, optimized = self.generate_timings(number_of_metrics)
            original_timings.append(original)
            optimized_timings.append(optimized)
            number_of_metrics_list.append(number_of_metrics)

        print(original_timings)
        print(optimized_timings)

        plt.plot(number_of_metrics_list, original_timings, label="original timings")
        plt.plot(number_of_metrics_list, optimized_timings, label="optimized timings")
        plt.xlabel("Number of logged metrics")
        plt.ylabel(f"Timing of {self.n_experiments:.1e} function calls [s]")
        plt.legend()
        plt.show()

    def generate_timings(self, number_of_metrics: int):
        original_progress_bar = RichProgressBar()
        optimized_progress_bar = OptimizedProgressBar()
        for progress_bar in (original_progress_bar, optimized_progress_bar):
            self.setup_progress_bar(progress_bar, number_of_metrics)
            task = progress_bar.progress.tasks[0]
            metrics_column = progress_bar.progress.columns[-1]
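            # Time only the render call itself, isolated from the rest of the training loop.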
            yield timeit.timeit(
                lambda: metrics_column.render(task), number=self.n_experiments
            )

    @classmethod
    def setup_progress_bar(cls, progress_bar, number_of_metrics):
        trainer = Trainer(
            num_sanity_val_steps=0,
            max_epochs=1,
            limit_train_batches=1,
            limit_val_batches=1,
            check_val_every_n_epoch=100,
            callbacks=progress_bar,
        )
        model = CustomModel(number_of_metrics=number_of_metrics)
        trainer.fit(model)


if __name__ == "__main__":
    BenchMarker().start()
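
The same effect can also be sanity-checked independently of Lightning by timing just the string-building step; a minimal standalone snippet (hypothetical, not part of this PR):

import timeit

metrics = {f"train_loss_{i}": 0.123456 for i in range(100)}

def concat() -> str:
    # Repeated concatenation builds a new intermediate string on most iterations.
    text = ""
    for k, v in metrics.items():
        text += f"{k}: {round(v, 3)} "
    return text

def join() -> str:
    # Single join over a generator of fragments.
    return " ".join(f"{k}: {round(v, 3)}" for k, v in metrics.items())

for fn in (concat, join):
    print(fn.__name__, timeit.timeit(fn, number=10_000))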

Please let me know if I can provide more information or if you would like to see additional benchmarks!

@awaelchli added this to the 2.1 milestone on Aug 22, 2023
@carmocca (Contributor) left a comment:

LGTM

@awaelchli changed the title from "Fix inefficiency rich progress bar" to "Fix inefficiency in rich progress bar" on Aug 23, 2023
@awaelchli merged commit 1867bd7 into Lightning-AI:master on Aug 23, 2023
82 checks passed
Borda pushed a commit that referenced this pull request Aug 28, 2023
lantiga pushed a commit that referenced this pull request Aug 30, 2023
Labels: community (This PR is from the community), performance, pl (Generic label for PyTorch Lightning package), progress bar: rich, ready (PRs ready to be merged)
Development
Successfully merging this pull request may close issue #18366: Inefficiency in Rich Progress bar.
4 participants