The CommonRLInterface package provides an interface for defining and interacting with Reinforcement Learning Environments.
An important goal is to provide compatibility between different reinforcement learning (RL) environment interfaces: for example, an algorithm that uses YourRLInterface should be able to use an environment from MyRLInterface without depending on MyRLInterface, as long as both support CommonRLInterface.
By design, this package is only concerned with environments and not with policies or agents.
A few simple examples can be found in the `examples` directory, and detailed documentation is available as well. A brief overview is given below:
`AbstractEnv` is a base type for all environments. The interface has five required functions for all `AbstractEnv`s:
```julia
reset!(env)      # returns nothing
actions(env)     # returns the set of all possible actions for the environment
observe(env)     # returns an observation
act!(env, a)     # steps the environment forward and returns a reward
terminated(env)  # returns true or false indicating whether the environment has finished
```
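To illustrate the shape of these five functions, here is a minimal standalone sketch for a hypothetical 1-D "line walk" environment. The environment name, dynamics, and rewards are invented for illustration; in real use you would `using CommonRLInterface`, declare `mutable struct LineEnv <: AbstractEnv`, and extend the interface functions (e.g. `CommonRLInterface.reset!`), but this sketch defines plain functions so it runs without the package.

```julia
# Hypothetical environment: an agent starts at position 0 and must reach `goal`.
mutable struct LineEnv
    pos::Int   # current position on the line
    goal::Int  # episode terminates when pos == goal
end

reset!(env::LineEnv) = (env.pos = 0; nothing)   # returns nothing
actions(env::LineEnv) = (-1, 1)                 # step left or step right
observe(env::LineEnv) = env.pos                 # observation is the position
function act!(env::LineEnv, a)                  # step forward, return reward
    env.pos += a
    return env.pos == env.goal ? 1.0 : -0.1     # small step cost, goal bonus
end
terminated(env::LineEnv) = env.pos == env.goal

# A random rollout using only the five required functions:
env = LineEnv(0, 3)
reset!(env)
total = 0.0
while !terminated(env)
    total += act!(env, rand(actions(env)))
end
```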
Additional behavior for an environment can be specified with the optional interface outlined in the documentation. The `provided` function can be used to check whether an environment implements a given piece of optional behavior.
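The basic idea behind such a check can be sketched without the package using Julia's `hasmethod`. Note this is an illustrative simplification, not CommonRLInterface's actual `provided` implementation; `provided_sketch`, `MyEnv`, and the use of `clone`/`render` as example optional functions are assumptions here.

```julia
# Two optional functions: MyEnv opts into `clone` but not `render`.
struct MyEnv end
clone(env::MyEnv) = MyEnv()

function render end   # declared, but no method for MyEnv

# Sketch of an optional-behavior check: does `f` have a method for this env type?
provided_sketch(f, env) = hasmethod(f, Tuple{typeof(env)})

env = MyEnv()
if provided_sketch(clone, env)
    env2 = clone(env)   # only called because the environment implements it
end
```

An algorithm can use such a check to fall back gracefully, e.g. re-creating an environment from scratch when cloning is not provided.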
Optional functions allow implementation of both sequential and simultaneous games, as well as multi-agent (PO)MDPs.
A wrapper system described in the documentation allows for easy modification of environments.
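The core idea of wrapping can be sketched as follows: a wrapper holds the underlying environment, forwards most interface functions unchanged, and overrides only the behavior it modifies. The types and names below (`CountEnv`, `ScaledRewardWrapper`) are invented for illustration and standalone so the sketch runs without the package; consult the documentation for the package's actual wrapper system.

```julia
# A hypothetical base environment: observation is a running count.
mutable struct CountEnv
    n::Int
end
reset!(env::CountEnv) = (env.n = 0; nothing)
observe(env::CountEnv) = env.n
act!(env::CountEnv, a) = (env.n += a; Float64(a))   # reward is the step size
terminated(env::CountEnv) = false

# A wrapper that scales rewards while forwarding everything else.
struct ScaledRewardWrapper
    env::CountEnv
    scale::Float64
end
reset!(w::ScaledRewardWrapper) = reset!(w.env)           # forwarded
observe(w::ScaledRewardWrapper) = observe(w.env)         # forwarded
terminated(w::ScaledRewardWrapper) = terminated(w.env)   # forwarded
act!(w::ScaledRewardWrapper, a) = w.scale * act!(w.env, a)  # modified
```

Because the wrapper satisfies the same interface, algorithms can use it anywhere they would use the unwrapped environment.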
These packages are compatible with CommonRLInterface: