
AttributeError with LightningModule forward without Trainer #9716

Closed
lucmos opened this issue Sep 27, 2021 · 5 comments
Labels
bug (Something isn't working) · help wanted (Open to be worked on)

Comments

@lucmos
Contributor

lucmos commented Sep 27, 2021

🐛 Bug

With Lightning 1.4, LightningModule assumes that self.trainer is never None. There were no issues with 1.3.

To Reproduce

import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def forward(self, *args, **kwargs):
        # Logging from forward: works on 1.3, raises on 1.4 without a Trainer
        self.log_dict({"key": 0})


print(pl.__version__)

# Call the module directly, without ever constructing a Trainer
model = MyModule()
model()

Outputs:

1.4.0
Traceback (most recent call last):
  File "/home/luca/Projects/CookieTesting/my-new-project/src/my_new_project/commands/mymodule.py", line 11, in <module>
    model()
  File "/home/luca/miniconda3/envs/my-new-project/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/luca/Projects/CookieTesting/my-new-project/src/my_new_project/commands/mymodule.py", line 6, in forward
    self.log_dict({"key": 0})
  File "/home/luca/miniconda3/envs/my-new-project/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 507, in log_dict
    self.log(
  File "/home/luca/miniconda3/envs/my-new-project/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 407, in log
    results = self.trainer._results
AttributeError: 'NoneType' object has no attribute '_results'

Expected behavior

If no Trainer and/or logger is attached, the log and log_dict calls should simply be ignored.
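For example, a guard like the following matches that expectation. This is only a minimal sketch of the requested behavior, done on the user side; it relies on self.trainer being None when no Trainer is attached, as the traceback above shows, not on any official Lightning API for this.

import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def forward(self, *args, **kwargs):
        # Sketch of the expected behavior: silently skip logging
        # when no Trainer is attached (self.trainer is None).
        if self.trainer is not None:
            self.log_dict({"key": 0})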

Environment

* CUDA:
        - GPU:
        - available:         False
        - version:           10.2
* Packages:
        - numpy:             1.21.2
        - pyTorch_debug:     False
        - pyTorch_version:   1.9.0
        - pytorch-lightning: 1.4.0
        - tqdm:              4.62.3
* System:
        - OS:                Linux
        - architecture:
                - 64bit
                - ELF
        - processor:         x86_64
        - python:            3.8.11
        - version:           #40~20.04.1-Ubuntu SMP Sat Sep 18 02:14:19 UTC 2021

Additional context

The problem is caused by this line:

https://github.com/PyTorchLightning/pytorch-lightning/blob/ab069876cb19bb9de0179f74c6f83764876a73ff/pytorch_lightning/core/lightning.py#L398

Related PR: #7891

@carmocca

@lucmos added the bug and help wanted labels Sep 27, 2021
@rohitgr7
Contributor

self.log_dict({"key": 0})

log_dict is specific to Lightning and is handled by the trainer and loggers. So I guess a trainer is required here?

@lucmos
Contributor Author

lucmos commented Sep 27, 2021

My use case is to re-use the same LightningModule code for training (with the Trainer and loggers) and for inference/prediction (without them).

@carmocca
Contributor

Previous answer on this:

#8509 (comment)

@rohitgr7
Contributor

Seems reasonable. Just curious: if you are logging inside the forward function, how do you differentiate the log keys, since forward will be used for training/validation/testing, I assume?
You can actually prevent this, if you want, by simply not calling log_dict during inference, gated by some sort of boolean argument (see the sketch below).
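A minimal sketch of that suggestion, assuming a hypothetical enable_logging flag (the flag name and wiring are made up for illustration):

import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def __init__(self, enable_logging: bool = True):
        super().__init__()
        # Hypothetical flag: disable it when running without a Trainer
        self.enable_logging = enable_logging

    def forward(self, *args, **kwargs):
        if self.enable_logging:
            self.log_dict({"key": 0})


# Standalone inference without a Trainer: logging is skipped
model = MyModule(enable_logging=False)
model()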

@carmocca
Contributor

carmocca commented Sep 27, 2021

Actually, calling self.log or self.log_dict inside forward is highly discouraged. forward should be kept a pure function that computes the network's outputs. Please call self.log / self.log_dict in one of the *_step functions instead! That's exactly what they are for.

Closing this, as using forward for logging is not good practice. Discussion can continue in #8509.

if you are logging inside the forward function, how do you differentiate the log keys, since forward will be used for training/validation/testing, I assume?

It will use the name of the *_step method that called forward.
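For illustration, a minimal sketch of that recommendation (the module internals and the log key are made up):

import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def forward(self, x):
        # forward stays pure: it only computes the network's outputs
        return x * 2

    def training_step(self, batch, batch_idx):
        out = self(batch)  # calls forward
        loss = out.sum()
        # Logging lives in the *_step hooks, where a Trainer is attached
        self.log_dict({"train_loss": loss})
        return loss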
