[FEATURE] Add "Reduce LR On Plateau" scheduler #897
base: v0.x
Conversation
Job PR-897/1 is complete.
        else:  # mode == 'max':
            self.mode_worse = -np.Inf

        self.is_better = partial(self._cmp, mode, threshold_mode, threshold)
Why do you want to use partial? mode, threshold_mode, and threshold are already instance attributes, so you can simply access them with the self. prefix.
Right. I will fix.
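A minimal sketch of what the reviewer's suggestion could look like, not the code in this PR: the attributes set in __init__ are read through self, so no functools.partial binding is needed. The class name _PlateauSketch and the comparison logic (mirroring the PyTorch ReduceLROnPlateau thresholds) are only an illustration.

```python
# Hypothetical sketch; names are not from this PR.
class _PlateauSketch:
    def __init__(self, mode='min', threshold_mode='rel', threshold=1e-4):
        self.mode = mode
        self.threshold_mode = threshold_mode
        self.threshold = threshold
        # Worst possible value for the chosen mode, used to seed "best".
        self.mode_worse = float('inf') if mode == 'min' else float('-inf')

    def is_better(self, a, best):
        # Decide whether metric value `a` improves on `best`,
        # honouring the relative/absolute threshold.
        if self.mode == 'min' and self.threshold_mode == 'rel':
            return a < best * (1.0 - self.threshold)
        if self.mode == 'min' and self.threshold_mode == 'abs':
            return a < best - self.threshold
        if self.mode == 'max' and self.threshold_mode == 'rel':
            return a > best * (1.0 + self.threshold)
        return a > best + self.threshold  # mode == 'max', threshold_mode == 'abs'
```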
    def in_cooldown(self):
        return self.cooldown_counter > 0

    def _cmp(self, mode, threshold_mode, threshold, a, best):
The current design does not look very scalable: it requires hard-coding if we would like to add new changes/schedules.
Thanks for the comments.
I can consider a more flexible class design.
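One possible direction, sketched purely as an illustration of what "more flexible" could mean (none of these names are from the PR): inject the improvement criterion as a callable so adding a new comparison rule or schedule does not require editing a hard-coded _cmp.

```python
# Illustrative sketch only; names are hypothetical and not from this PR.
from typing import Callable

def rel_min(threshold: float) -> Callable[[float, float], bool]:
    # "Improved" means the metric dropped by more than a relative margin.
    return lambda value, best: value < best * (1.0 - threshold)

class FlexiblePlateau:
    def __init__(self, is_better: Callable[[float, float], bool],
                 patience: int = 10, factor: float = 0.5, cooldown: int = 0):
        self.is_better = is_better          # pluggable improvement criterion
        self.patience = patience
        self.factor = factor
        self.cooldown = cooldown
        self.cooldown_counter = 0

    def in_cooldown(self):
        return self.cooldown_counter > 0
```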
@haven-jeon gentle ping
Description
An LR scheduler that reduces the learning rate when a metric has stopped improving. #887
Modified code from https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau
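For reference, the PyTorch original that this code is adapted from is typically driven as follows; this shows the PyTorch API, not necessarily the interface added in this PR, and the validation loss here is a placeholder.

```python
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Reduce the LR by a factor of 0.1 if the monitored metric has not
# improved for 10 consecutive epochs.
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)

for epoch in range(100):
    val_loss = 1.0 / (epoch + 1)  # placeholder for the real validation loss
    scheduler.step(val_loss)      # scheduler decides whether to reduce the LR
```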
Checklist
Essentials
Changes
Comments