Config option to enable/disable batchnorm running values #13
Thanks for your valuable suggestion/feedback @josedvq! Apologies for the delayed response, I've had limited bandwidth to dedicate to Finetuning Scheduler between minor releases so am just preparing FTS for the upcoming 2.3.0 Lightning release now.

Root Cause

About 19 months ago FTS set a default of `train_bn=False`, which disables `track_running_stats` on frozen batch norm layers. In light of your suggestion I've just pushed a commit that I think accordingly improves FTS usability...

New State w/ Usability Improvements - (FTS >= v2.3.0)

The commit will be released with the next Lightning/FTS minor release (2.3.0).

Hope you've found FTS of some utility once working through the issue you encountered. Thanks again for your contribution, you've helped improve FTS for everyone! Feel free to reach out anytime if you have other issues or want to share more about your use case. Best of luck with your work!
fixed with commit 9d7014b
Thanks for sharing this great tool
🚀 Feature
Add constructor argument `batch_norm_track_running_stats` to control `track_running_stats` of frozen batch norm layers, or avoid changing `track_running_stats` of batchnorm layers.

Motivation
Currently FinetuningScheduler calls `self.freeze(modules=pl_module, train_bn=False)`. That in turn executes the following logic for each model module:
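A sketch of that per-module freeze logic, assuming the behavior of Lightning's `BaseFinetuning.freeze_module` (the exact upstream code may differ by version):

```python
import torch.nn as nn
from torch.nn.modules.batchnorm import _BatchNorm

# Assumed behavior of BaseFinetuning.freeze_module; may differ from the
# exact upstream implementation.
def freeze_module(module: nn.Module) -> None:
    if isinstance(module, _BatchNorm):
        # Frozen BatchNorm layers also stop updating their running statistics.
        module.track_running_stats = False
    for param in module.parameters(recurse=False):
        param.requires_grad = False
```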
This can be confusing because setting `track_running_stats=False` is not normally done when fine-tuning. It also silently changes the values set when creating the model. The default value of `track_running_stats` is `True`, so when one fine-tunes a model following the standard procedure of setting `requires_grad`, normally `track_running_stats` will be `True` unless set otherwise. I'm not sure why it is implemented this way in `BaseFinetuning`.
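To make the silent change concrete, here is a standalone PyTorch demonstration (not FTS-specific) of what disabling `track_running_stats` on an already-constructed layer does during training:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)  # track_running_stats=True by default
x = torch.randn(8, 3, 4, 4)

bn.train()
before = bn.running_mean.clone()
bn(x)
assert not torch.equal(bn.running_mean, before)  # running stats updated

bn.track_running_stats = False  # what freezing does under the hood
before = bn.running_mean.clone()
bn(x)
assert torch.equal(bn.running_mean, before)  # running stats now frozen
```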
Pitch
Adding a constructor argument to provide control over the batchnorm behavior when using the module. I think the default should be to set `track_running_stats=True` to reproduce the standard "recipe" for fine-tuning. Alternatively, do not alter `track_running_stats` at all.
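A hypothetical sketch of the proposed interface (the argument name and default here are this proposal's suggestion, not an existing FTS API):

```python
from finetuning_scheduler import FinetuningScheduler

# Hypothetical `batch_norm_track_running_stats` argument proposed above
# (not part of any current FTS release). True would keep the standard
# fine-tuning recipe; False would reproduce today's freezing behavior.
fts = FinetuningScheduler(batch_norm_track_running_stats=True)
```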
At the very least I think it would be important to document the current behavior.