
Empty "outputs" argument in the on_train_batch_end() method of Callback #5517

Closed
cemanil opened this issue Jan 14, 2021 · 1 comment · Fixed by #4369
Labels: bug (Something isn't working), help wanted (Open to be worked on), priority: 1 (Medium priority task)
Milestone: 1.1.x

Comments

cemanil commented Jan 14, 2021

🐛 Bug

The "outputs" argument of a Lightning callback's on_train_batch_end() method is empty unless training_epoch_end() is implemented in the LightningModule.

I'm looking for a way to process the outputs of training_step() in a callback. If I'm not mistaken, the "outputs" argument of a callback's on_train_batch_end() is meant for exactly this use case. However, if I don't implement training_epoch_end() in my LightningModule, "outputs" is consistently an empty list. Implementing training_epoch_end() does populate "outputs" with the return value of training_step(), but I'd like to avoid that, since keeping every training_step output for an entire epoch can be memory intensive.
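For reference, the hook contract I am relying on looks roughly like the following. This is a minimal, dependency-free sketch: the Trainer, Callback, and module classes below only mimic the Lightning API shape (they are stand-ins I wrote for illustration, not Lightning itself), but they show what I expect "outputs" to contain.

```python
# Stand-in for the Lightning hook contract (not the real library):
# the trainer calls training_step(), then passes its return value to
# each callback's on_train_batch_end() as the "outputs" argument.

class Callback:
    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        pass


class LossLogger(Callback):
    """Collects the loss from each training_step output."""

    def __init__(self):
        self.losses = []

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        # "outputs" should be whatever training_step() returned -- this is
        # the value that arrives as an empty list in the reported bug.
        self.losses.append(outputs["loss"])


class ToyModule:
    def training_step(self, batch, batch_idx):
        # Pretend the loss is just the batch value.
        return {"loss": float(batch)}


class ToyTrainer:
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def fit(self, module, data):
        for batch_idx, batch in enumerate(data):
            outputs = module.training_step(batch, batch_idx)
            for cb in self.callbacks:
                cb.on_train_batch_end(self, module, outputs, batch, batch_idx)


logger = LossLogger()
ToyTrainer([logger]).fit(ToyModule(), [1, 2, 3])
print(logger.losses)  # -> [1.0, 2.0, 3.0]
```

In this toy loop the callback sees each batch's output immediately and nothing is retained across the epoch, which is the behaviour I was hoping for from the real hook.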

To Reproduce

The following BoringModel link demonstrates the behaviour I'm referring to.

Expected behavior

The "outputs" argument of a callback's on_train_batch_end() should contain the output of training_step() even when training_epoch_end() is not implemented in the LightningModule. Instead, commenting out training_epoch_end() leaves "outputs" an empty list.

Environment

  • CUDA:
    • GPU:
      • Tesla T4
    • available: True
    • version: 10.1
  • Packages:
    • numpy: 1.19.5
    • pyTorch_debug: True
    • pyTorch_version: 1.7.0+cu101
    • pytorch-lightning: 1.1.4
    • tqdm: 4.41.1
  • System:
    • OS: Linux
    • architecture:
      • 64bit
    • processor: x86_64
    • python: 3.6.9
    • version: #1 SMP Thu Jul 23 08:00:38 PDT 2020

Additional context

If this is a feature rather than a bug, how do you recommend we use the outputs of training_step in a callback without having to track all training_step outputs for an entire epoch?
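One workaround I have considered, sketched below without Lightning itself (the last_training_step_output attribute is a name I made up for illustration, not part of the Lightning API), is to stash only the most recent training_step() return value on the module and read it back in the callback, so memory stays constant rather than growing over the epoch:

```python
class ToyModule:
    def __init__(self):
        # Hypothetical attribute, not Lightning API: holds only the
        # latest training_step output, so memory use stays O(1).
        self.last_training_step_output = None

    def training_step(self, batch, batch_idx):
        out = {"loss": float(batch)}
        self.last_training_step_output = out
        return out


class LatestLossCallback:
    def __init__(self):
        self.latest = None

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        # Read from the module instead of the (empty) "outputs" argument.
        self.latest = pl_module.last_training_step_output["loss"]


# Drive the hooks by hand to show the pattern.
module, cb = ToyModule(), LatestLossCallback()
for batch_idx, batch in enumerate([5, 7]):
    module.training_step(batch, batch_idx)
    cb.on_train_batch_end(None, module, [], batch, batch_idx)
print(cb.latest)  # -> 7.0
```

This sidesteps the empty "outputs" argument entirely, at the cost of coupling the callback to a module attribute.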

@cemanil cemanil added bug Something isn't working help wanted Open to be worked on labels Jan 14, 2021
@github-actions
Contributor

Hi! Thanks for your contribution, great first issue!

@s-rog s-rog linked a pull request Jan 15, 2021 that will close this issue
@Borda Borda added the priority: 1 Medium priority task label Jan 17, 2021
@Borda Borda added this to the 1.1.x milestone Jan 17, 2021