Added a how-to-guide for parameter schedulers #80
Conversation
* Co-author: vfdev
This PR is a modified version of PR #49.
Co-authored-by: Sylvain Desroziers <sylvain.desroziers@gmail.com>
Why not improve #49 instead of creating a new PR? Anyway.
I have doubts about the added description. It's copied and pasted from the official PyTorch doc, and I'm not sure those details really matter. I would have preferred something more personal to the project.
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"import ast\n", |
Useful?
It is used in MultiStepLR for parsing the string values of the milestones.
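Roughly, the idea is something like this (a minimal sketch; the string value and the ast.literal_eval call are my assumption of how the milestones get parsed):

```python
import ast

import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import MultiStepLR

# Assumed setup: the milestones arrive as a string, e.g. typed into the notebook
milestones_str = "[10, 20, 30]"
milestones = ast.literal_eval(milestones_str)  # safely parse the string into a Python list

model = nn.Linear(4, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# MultiStepLR multiplies the learning rate by gamma at each milestone epoch
scheduler = MultiStepLR(optimizer, milestones=milestones, gamma=0.1)
```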
"source": [ | ||
"## LambdaLR\n", | ||
"\n", | ||
"Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.\n", |
last_epoch? A plain copy/paste doesn't sound like enough. IMO it's not useful and should be removed from the other description cells too.
Should I try to rephrase it? I thought this was the simplest way to explain these schedulers.
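For reference, the behaviour that sentence describes is roughly the following (a minimal sketch; the model, initial lr, and lambda are illustrative):

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import LambdaLR

model = nn.Linear(4, 2)
initial_lr = 0.1
optimizer = optim.SGD(model.parameters(), lr=initial_lr)

# lr after t scheduler steps = initial_lr * lr_lambda(t)
scheduler = LambdaLR(optimizer, lr_lambda=lambda t: 0.95 ** t)

for t in range(3):
    optimizer.step()   # the training step would go here
    scheduler.step()
    print(optimizer.param_groups[0]["lr"])  # 0.1 * 0.95 ** (t + 1)
```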
"Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.\n", | ||
"\n", | ||
"$$\n", | ||
"l r_{\\text {epoch}} = l r_{\\text {initial}} * Lambda(epoch)\n", |
In the PyTorch doc, the scheduler's step() is called at the end of each epoch; it's more flexible in Ignite. The formula should not be written using epoch as the index name.
Oh, my bad. I'm sorry, I didn't think of it that way.
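If I understand correctly, the more flexible Ignite usage would be something like this (a minimal sketch; I'm assuming LRScheduler is importable from ignite.contrib.handlers in the version used, and the event choice is just an example):

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import LambdaLR

from ignite.contrib.handlers import LRScheduler  # assumed import path; newer releases expose it from ignite.handlers
from ignite.engine import Engine, Events

model = nn.Linear(4, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Wrap the PyTorch scheduler so it can be attached to any Ignite event
scheduler = LRScheduler(LambdaLR(optimizer, lr_lambda=lambda t: 0.95 ** t))

def train_step(engine, batch):
    pass  # training logic would go here

trainer = Engine(train_step)

# The scheduler can fire every iteration (or every epoch, or any custom event),
# which is why "epoch" is not the right index name in the formula
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)
```

So the index in the formula is really the number of times the scheduler has fired, not necessarily the epoch.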
This PR is a modified version of PR #49.
@Priyansi @sdesrozis Can you please give it a review?