# ML-Agents Release 16
## Package Versions
**NOTE:** It is strongly recommended that you use packages from the same release together for the best experience.
| Package | Version |
|---|---|
| com.unity.ml-agents (C#) | v1.9.1 |
| com.unity.ml-agents.extensions (C#) | v0.3.1-preview |
| ml-agents (Python) | v0.25.1 |
| ml-agents-envs (Python) | v0.25.1 |
| gym-unity (Python) | v0.25.1 |
| Communicator (C#/Python) | v1.5.0 |
## Major Changes
### ml-agents / ml-agents-envs / gym-unity (Python)
- The `--resume` flag now supports resuming experiments with additional reward providers, or loading partial models if the network architecture has changed. (#5213)
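As a sketch, resuming a run after editing its trainer configuration looks like the usual `mlagents-learn` invocation with `--resume` added (the config path and run ID below are placeholders, and ml-agents v0.25.1 must be installed):

```shell
# Resume the run identified by --run-id from its latest checkpoint.
# config.yaml and MyRun are hypothetical; substitute your own values.
mlagents-learn config.yaml --run-id=MyRun --resume
```

With this release, the resume still works even if `config.yaml` now lists an additional reward signal, or if the partial model can be loaded after a network change.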
## Bug Fixes
### com.unity.ml-agents (C#)
- Fixed erroneous warnings when using the Demonstration Recorder. (#5216)
### ml-agents / ml-agents-envs / gym-unity (Python)
- Fixed an issue which was causing increased variance when using LSTMs. Also fixed an issue with LSTM when used with POCA and `sequence_length` < `time_horizon`. (#5206)
- Fixed a bug where the SAC replay buffer would not be saved out at the end of a run, even if `save_replay_buffer` was enabled. (#5205)
- ELO now correctly resumes when loading from a checkpoint. (#5202)
- In the Python API, fixed `validate_action` to expect the right dimensions when `set_action_single_agent` is called. (#5208)
- In the `GymToUnityWrapper`, raise an appropriate warning if `step()` is called after an environment is done. (#5204)
- Fixed an issue where using one of the `gym` wrappers would override user-set log levels. (#5201)
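Several of the fixed options above live in the trainer configuration file. As an illustrative (not authoritative) SAC config fragment, assuming a hypothetical behavior named `MyBehavior` and placeholder hyperparameter values, the `save_replay_buffer`, `sequence_length`, and `time_horizon` settings mentioned above appear as:

```yaml
behaviors:
  MyBehavior:                    # hypothetical behavior name
    trainer_type: sac
    hyperparameters:
      learning_rate: 0.0003
      buffer_size: 500000
      save_replay_buffer: true   # persist the SAC replay buffer at the end of a run (#5205)
    network_settings:
      hidden_units: 128
      memory:
        sequence_length: 64      # LSTM sequence length; #5206 covers sequence_length < time_horizon
        memory_size: 128
    time_horizon: 128
    max_steps: 500000
```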