type object got multiple values for keyword argument 'loss' #3756

Closed
chrismaliszewski opened this issue Sep 30, 2020 · 4 comments
Labels: bug, help wanted

Comments

@chrismaliszewski

🐛 Bug

The error appears when TrainResult has the minimize param set and a loss log is added at the same time with prog_bar=True.

Code sample

def training_step(self, batch, batch_idx):
    loss = self(batch)
    result = pl.TrainResult(minimize=loss)
    result.log("loss", loss, prog_bar=True)

    return result

Where the problem is

I followed the code and traced the problem to the ProgressBar callback: progress.py line 339 -> trainer.py line 884 (return dict(**ref_model.get_progress_bar_dict(), **self.progress_bar_metrics)), where both dicts contain a 'loss' key:

ref_model.get_progress_bar_dict()
Out[4]: {'loss': '0.692', 'v_num': 9}
self.progress_bar_metrics
Out[5]: {'loss': 0.6924866437911987}
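A minimal reproduction of the clash outside Lightning (the exact wording of the error depends on the Python version):

a = {'loss': '0.692', 'v_num': 9}   # ref_model.get_progress_bar_dict()
b = {'loss': 0.6924866437911987}    # self.progress_bar_metrics
dict(**a, **b)  # TypeError: ... got multiple values for keyword argument 'loss'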

Expected behavior

Not sure. At the very least, the error message should be clearer, since the user creates only one loss log, not two.

Environment

  • CUDA:
    • GPU:
    • available: False
    • version: None
  • Packages:
    • numpy: 1.19.1
    • pyTorch_debug: False
    • pyTorch_version: 1.6.0
    • pytorch-lightning: 0.9.0
    • tqdm: 4.49.0
  • System:
    • OS: Windows
    • architecture:
      • 64bit
      • WindowsPE
    • processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
    • python: 3.8.5
    • version: 10.0.18362
@ydcjeff
Contributor

ydcjeff commented Oct 1, 2020

I believe TrainResult always shows the value passed to the minimize argument in the progress bar. So, you probably don't need to use result.log to show the loss in the progress bar.
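For illustration, a minimal sketch of training_step without the duplicate log (based on the snippet above; the minimize value still shows up in the progress bar):

def training_step(self, batch, batch_idx):
    loss = self(batch)
    # TrainResult(minimize=...) already reports this value in the progress bar,
    # so no extra result.log("loss", ..., prog_bar=True) is needed
    return pl.TrainResult(minimize=loss)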

@chrismaliszewski
Author

That's true, but I think the problem is the error message.

As I said, a user may not realise why the error appears, since they created just one loss log, nor that it comes from prog_bar=True. Just saying.

The only idea I have would be to stop showing the minimize loss in the progress bar by default and let users choose to show a loss log with prog_bar. That would eliminate the problem. Don't know whether that's something others would want, though.

@ydcjeff
Contributor

ydcjeff commented Oct 2, 2020

Thank you for your feedback.

Lightning assumes that almost all users want the training loss shown in the progress bar so they can check and iterate on the model quickly (for example, when overfitting a small batch) without needing loggers or anything extra, so it is shown by default. Users can then use result.log or self.log for other metrics or losses and send them to the progress bar or to the loggers at epoch or step level.

NOTE: self.log is a feature from the master branch.
Docs:
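A rough sketch of that usage on master (assuming the self.log API; a key other than "loss" avoids the clash with the automatic progress-bar entry):

def training_step(self, batch, batch_idx):
    loss = self(batch)
    # other metrics/losses can be surfaced per step or per epoch via self.log
    self.log("train_loss", loss, prog_bar=True, on_step=True, on_epoch=True)
    return loss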

@chrismaliszewski
Author

Got it. I'll close the issue. It'll stay here for other people's reference if they hit the same problem.
