Add Learning Rate Annealing to PPO #22

Closed
awjuliani opened this issue Sep 21, 2017 · 2 comments
@awjuliani (Contributor) commented:

The current implementation of PPO uses a fixed learning rate for the entire duration of training. This can produce degenerate models later in training, when a smaller learning rate is necessary.

The learning rate should be annealed to 0 over the course of training.
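
Linear annealing means the step size decays as lr(t) = lr_0 * (1 - t/T), reaching 0 at the final step T. As a rough illustration only (not the actual fix, which landed in 77b04d1), here is a minimal TensorFlow 1.x sketch using `tf.train.polynomial_decay` with `power=1.0`, which yields exactly that linear schedule; `initial_lr` and `max_steps` are hypothetical placeholder values, not the trainer's real hyperparameters:

```python
import tensorflow as tf

# Hypothetical values for illustration only; the real trainer would
# read these from its configuration.
initial_lr = 3.0e-4
max_steps = 5_000_000

# Step counter, incremented once per optimizer update (e.g. by passing
# it to optimizer.minimize(loss, global_step=global_step)).
global_step = tf.Variable(0, trainable=False, name="global_step")

# polynomial_decay with power=1.0 is a straight line from initial_lr
# down to end_learning_rate=0.0 over decay_steps updates.
learning_rate = tf.train.polynomial_decay(
    learning_rate=initial_lr,
    global_step=global_step,
    decay_steps=max_steps,
    end_learning_rate=0.0,
    power=1.0,
)

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
```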

awjuliani self-assigned this on Sep 21, 2017
@awjuliani (Contributor, Author) commented:

Addressed in 77b04d1

lock bot commented on Jan 5, 2020:

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked as resolved and limited conversation to collaborators on Jan 5, 2020