It would be great to have a feature that slows down the learning rate after the eval metric hasn't improved for a set number of rounds or after specified intervals.
I have seen this feature in a few neural network implementations, where it works well.
As an example, say I can reach a logloss of 0.5 with a 0.1 learning rate; that takes 1000 training rounds, after which the score doesn't improve for 50 rounds. With a 0.05 learning rate I can reach a logloss of 0.48, but that takes 2000 training rounds. So for the first few hundred rounds I could use a much higher rate, which is then lowered to squeeze out the last few bits. If I set the learning rate too low from the start, training sometimes plateaus and never reaches 0.48 at all.
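The fixed-interval version of this can arguably already be approximated with the Python package's `reset_parameter` callback, which accepts a function of the current round index. A minimal sketch (the schedule, the toy data, and all values are illustrative, not a proposal for the API):

```python
import numpy as np
import lightgbm as lgb

# Toy data, just so the example runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 2, size=1000)
train_set = lgb.Dataset(X[:800], label=y[:800])
valid_set = lgb.Dataset(X[800:], label=y[800:], reference=train_set)

# Halve the learning rate every 500 rounds, starting from 0.1.
# reset_parameter accepts a function of the 0-based round index.
lr_schedule = lgb.reset_parameter(
    learning_rate=lambda round_idx: 0.1 * (0.5 ** (round_idx // 500))
)

booster = lgb.train(
    {"objective": "binary", "metric": "binary_logloss"},
    train_set,
    num_boost_round=2000,
    valid_sets=[valid_set],
    callbacks=[lr_schedule],
)
```

That covers the scheduled case, but not the metric-driven one, which is what this request is mainly about.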
We could have parameters like these (a sketch of a callback along these lines follows the list):
- decrease the learning rate after N rounds without improvement in the eval metric
- decrease the learning rate every N rounds (every 500, for example)
- learning rate decrease factor (range 0 to 1; 0.5 would halve the learning rate)
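As far as I know the plateau-based variant isn't built in, but it could be prototyped as a custom callback. A rough sketch: `plateau_lr_decay` and its `patience`/`factor`/`min_lr` parameters are hypothetical names, not an existing LightGBM API, and it assumes a validation set is passed so that `env.evaluation_result_list` is populated:

```python
def plateau_lr_decay(patience=50, factor=0.5, min_lr=1e-3):
    """Hypothetical callback: multiply the learning rate by `factor`
    whenever the first validation metric has not improved for
    `patience` consecutive rounds."""
    state = {"best": None, "stale": 0, "lr": None}

    def _callback(env):
        # Entries of env.evaluation_result_list look like
        # (dataset_name, metric_name, value, is_higher_better).
        _, _, value, is_higher_better = env.evaluation_result_list[0]
        if state["lr"] is None:
            # Fall back to LightGBM's default learning rate of 0.1.
            state["lr"] = env.params.get("learning_rate", 0.1)
        improved = state["best"] is None or (
            value > state["best"] if is_higher_better else value < state["best"]
        )
        if improved:
            state["best"], state["stale"] = value, 0
        else:
            state["stale"] += 1
            if state["stale"] >= patience and state["lr"] * factor >= min_lr:
                state["lr"] *= factor
                state["stale"] = 0
                env.model.reset_parameter({"learning_rate": state["lr"]})

    _callback.order = 10  # LightGBM sorts callbacks by this attribute
    return _callback
```

Passing `callbacks=[plateau_lr_decay()]` to `lgb.train` together with a validation set would then behave roughly like the parameters above, though a built-in option would still be nicer.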