Add examples highlighting the learning process with state-of-the-art RL algorithms #51
Comments
Proposal: Use available, pre-fabricated RL toolboxes (PyTorch -> Stable-Baselines3, TensorFlow -> Tensorforce / Keras-RL2).
Blocked by #101.
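As a rough illustration of what the toolbox route could look like, here is a minimal Stable-Baselines3 training sketch. The environment ID "Pendulum-v1" is only a stand-in continuous-control task; the project's own Gym-compatible microgrid environment and suitable hyperparameters would be substituted, and the exact gym/gymnasium import depends on the installed SB3 version.

```python
# A minimal training sketch, assuming a Gym-compatible environment.
# "Pendulum-v1" is only a stand-in continuous-control task; the project's
# microgrid environment would be substituted here.
import gym
import numpy as np

from stable_baselines3 import DDPG
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.noise import NormalActionNoise

env = gym.make("Pendulum-v1")  # placeholder for the project's environment ID

# Gaussian exploration noise on the continuous action
n_actions = env.action_space.shape[-1]
action_noise = NormalActionNoise(mean=np.zeros(n_actions),
                                 sigma=0.1 * np.ones(n_actions))

model = DDPG("MlpPolicy", env, action_noise=action_noise, verbose=1)
model.learn(total_timesteps=20_000)

# Evaluate the learned policy
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=5)
print(f"mean episode reward: {mean_reward:.1f} +/- {std_reward:.1f}")
```

DDPG is used here only because it handles continuous action spaces; SAC or TD3 from the same library would drop in with the same interface.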
Webbah pushed commits that referenced this issue on Apr 7, Apr 16 (×2), Apr 27 (×2), Apr 28, May 17, and May 19, 2021.
Based on the available expert controller design examples (PI-based inner current/voltage control + droop control for power sharing), it would be very interesting to highlight the shortcomings and advantages of applying state-of-the-art RL algorithms as a replacement for the expert-based controllers.
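For context, below is a minimal sketch of the kind of expert baseline described above: a discrete-time PI controller combined with a P-f / Q-V droop characteristic. All gains, set-points, and signal values are illustrative placeholders, not the parameters used in the repository's controller examples.

```python
# Sketch of a PI controller plus droop-based set-point generation.
# Gains, nominal values, and measurements below are placeholders.
from dataclasses import dataclass


@dataclass
class PI:
    kp: float
    ki: float
    ts: float            # sampling time in s
    integral: float = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += self.ki * error * self.ts
        return self.kp * error + self.integral


def droop_setpoints(p_meas, q_meas, f_nom=50.0, v_nom=230.0,
                    p_nom=0.0, q_nom=0.0, k_f=1e-3, k_v=1e-3):
    """P-f and Q-V droop: lower the frequency/voltage references as the load grows."""
    f_ref = f_nom - k_f * (p_meas - p_nom)
    v_ref = v_nom - k_v * (q_meas - q_nom)
    return f_ref, v_ref


# Example: voltage PI cascaded into a current PI (single phase, placeholder values)
voltage_ctrl = PI(kp=0.05, ki=200.0, ts=1e-4)
current_ctrl = PI(kp=10.0, ki=500.0, ts=1e-4)

f_ref, v_ref = droop_setpoints(p_meas=5e3, q_meas=1e3)
i_ref = voltage_ctrl.step(setpoint=v_ref, measurement=228.0)
u = current_ctrl.step(setpoint=i_ref, measurement=2.0)   # inverter voltage command
```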