Fix bugs in scale_batch_size #2523
Conversation
- Set `model.batch_size` based on `model.hparams.batch_size` if it is not defined
- Changed `init_val` for `batch_size` to 0 so that scaling starts from the user-defined `batch_size` instead of always starting from 2 (see the sketch below)
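A minimal sketch of the lookup order these two points describe, assuming a helper that picks the starting batch size; the function name `_resolve_initial_batch_size` and the convention that `init_val = 0` means "use the user-defined value" are illustrative assumptions, not the exact code in this PR.

```python
# Illustrative only: resolve the starting batch size for the scaler.
def _resolve_initial_batch_size(model, init_val=0):
    if init_val > 0:
        # an explicit starting value was requested
        return init_val
    if hasattr(model, 'batch_size'):
        # prefer the attribute set directly on the model
        return model.batch_size
    if hasattr(model, 'hparams') and hasattr(model.hparams, 'batch_size'):
        # otherwise fall back to the hyperparameters, as this PR proposes
        return model.hparams.batch_size
    raise AttributeError('neither `model.batch_size` nor `model.hparams.batch_size` is defined')
```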
Hello @x2-l! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-07-06 13:07:55 UTC
Codecov Report
@@            Coverage Diff            @@
##           master   #2523     +/-   ##
========================================
+ Coverage      88%      89%      +1%
========================================
  Files          69       69
  Lines        5628     5641      +13
========================================
+ Hits         4963     5017      +54
+ Misses        665      624      -41
LGTM
@SkafteNicki mind checking?
I am not sure this will solve your problem, since it is only …
This should be handled by the user, no? It's the same as enabling …
Maybe we also need to check if `self.hparams` is a mapping, because then you can't use …
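As a rough illustration of the concern above, any lookup would have to branch on whether `hparams` is a Namespace-like object (attribute access) or a dict-like mapping (key access). This is only a sketch under that assumption, not the check the PR actually adds.

```python
from collections.abc import Mapping

# Sketch: read a field from `hparams` whether it is an argparse.Namespace-style
# object (attribute access) or a dict-like mapping (key access).
def get_hparam(hparams, name, default=None):
    if isinstance(hparams, Mapping):
        return hparams.get(name, default)
    return getattr(hparams, name, default)
```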
@x2-l …
@SkafteNicki what about reinitializing the dataloader like this?

```python
def replace_batch_size(self, dataloader, batch_size):
    skip_keys = ['batch_size']
    dl_args = {
        k: v for k, v in dataloader.__dict__.items()
        if not k.startswith('_') and k not in skip_keys
    }
    dl_args['batch_size'] = batch_size
    dataloader = type(dataloader)(**dl_args)
    return dataloader
```
@x2-l agree, that should work.
@x2-l I think there is already code like this for replacing the sampler/shuffle attributes. One could factor this out and create a general function.
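A rough sketch of the kind of factored-out helper suggested here, generalizing the `replace_batch_size` snippet above to arbitrary DataLoader constructor arguments; the function name and signature are illustrative assumptions, not the helper Lightning ended up with.

```python
from torch.utils.data import DataLoader

# Sketch of a general helper: rebuild a DataLoader with some of its original
# constructor arguments overridden (e.g. batch_size, sampler, shuffle).
# Attributes the DataLoader derives internally (such as batch_sampler) are
# skipped so they do not conflict with the overridden arguments.
def replace_dataloader_init_args(dataloader: DataLoader, **overrides) -> DataLoader:
    skip_keys = {'batch_sampler'} | set(overrides)
    dl_args = {
        k: v for k, v in dataloader.__dict__.items()
        if not k.startswith('_') and k not in skip_keys
    }
    dl_args.update(overrides)
    return type(dataloader)(**dl_args)
```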
I agree that this is quite a monkey patch and it would be better to support …
@SkafteNicki @Borda should I still update this, or can it be updated together with PR #1998?
On second thought, why not just raise a warning that the … @Borda @SkafteNicki @awaelchli thoughts?
While I see your point, we should be careful about that. E.g. when you reinstantiate your model and overwrite the wrong parameter, you introduce unwanted behaviour somewhere else. I think I'd prefer not to implicitly introduce competing new arguments unless necessary.
Enabled scaling of the batch size using hparams and removed setting the attribute on the model. If both are specified, a warning is raised.
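A minimal sketch of the "both are specified" warning described here; the helper name, the exact message, and the choice of which value wins are assumptions for illustration, not the wording merged into Lightning.

```python
import warnings

# Sketch: warn when the batch size is defined both directly on the model and
# inside model.hparams. Which value takes precedence is an assumption here.
def warn_if_batch_size_duplicated(model):
    on_model = hasattr(model, 'batch_size')
    in_hparams = hasattr(model, 'hparams') and hasattr(model.hparams, 'batch_size')
    if on_model and in_hparams:
        warnings.warn(
            'Both `model.batch_size` and `model.hparams.batch_size` are set; '
            'the batch size scaler will use `model.batch_size`.'
        )
```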
I enabled scaling of the batch size using …
Seems to take parts from #2223, so let's merge this after it...
Hello! Will this and/or #2223 also fix …
This pull request is now in conflict... :(
Hey! Recently a similar PR/issue was fixed by @SkafteNicki in #2821. Could we use the lightning_attr API he added there to implement these checks here? They look very similar.
Agree with @awaelchli that the first issue addressed in this PR is the same issue the learning rate finder had. If you could exchange all …
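For readers unfamiliar with the helpers referenced above, here is a simplified illustration of what a `lightning_attr`-style lookup does: check the model first, then fall back to `model.hparams`, whether that is an object or a mapping. This is a sketch of the idea, not the implementation added in #2821.

```python
from collections.abc import Mapping

# Simplified illustration of a lightning_attr-style lookup: resolve `name`
# either as a direct model attribute or as a field of model.hparams.
def lightning_getattr_sketch(model, name):
    if hasattr(model, name):
        return getattr(model, name)
    hparams = getattr(model, 'hparams', None)
    if isinstance(hparams, Mapping) and name in hparams:
        return hparams[name]
    if hparams is not None and hasattr(hparams, name):
        return getattr(hparams, name)
    raise AttributeError(f'{name} is neither on the model nor in model.hparams')
```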
It looks like this user and their repository completely vanished from GitHub. Should we copy these changes over into a new PR and continue as we discussed?
he behaves as his nick says :D
Fixes #2484