forked from openai/baselines
Issues: hill-a/stable-baselines
V3 new backend: PyTorch? and the future of Stable Baselines (#733) · by araffin · closed Mar 2, 2021 · 10 comments
Issues list
[feature request] Remove erroneous episode from replay buffer (#1197) · opened Oct 4, 2024 by WreckItTim
Using multiple environments with Unity ML-Agents (#1193) · label: custom gym env (Issue related to Custom Gym Env) · opened Mar 20, 2024 by 871234342
MlpPolicy network output layer softmax activation for continuous action space problem? (#1190) · opened Jan 15, 2024 by wbzhang233
[question] TypeError: 'NoneType' object is not callable with user-defined env (#1181) · opened Apr 11, 2023 by Charles-Lim93
Can I use an agent to act and observe interactions with no/minimal use of the environment? (#1178) · opened Jan 30, 2023 by aheidariiiiii1993
How to create an actor-critic network with two separate LSTMs (#1177) · opened Oct 28, 2022 by ashleychung830
Custom gym env assertion error regarding the reset() method (#1173) · opened Sep 2, 2022 by sheila-janota
[Question] Callback-collected model does not have the same reward as training verbose [custom gym environment] (#1170) · opened Aug 15, 2022 by hotpotking-lol
True rewards remaining "zero" in the trajectories in Stable Baselines 2 for custom environments (#1167) · labels: custom gym env (Issue related to Custom Gym Env), question (Further information is requested) · opened Jul 26, 2022 by moizuet
Deep Q-value network evaluation in the SAC algorithm (#1166) · label: question (Further information is requested) · opened Jul 19, 2022 by moizuet