The library currently supports straightforward training of DDPG-based and DQN-based models. Trained models can also be saved to disk and reloaded later.
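Since the library's actual API is not shown here, the sketch below only illustrates the general save/reload pattern with a hypothetical `TinyModel` container and stdlib `pickle`; the names `save_model`, `load_model`, and `TinyModel` are placeholders, not this library's functions.

```python
import os
import pickle
import tempfile

# Hypothetical minimal "model": just a container of learned parameters.
class TinyModel:
    def __init__(self, weights):
        self.weights = weights

def save_model(model, path):
    # Serialize only the parameters, not the whole object graph.
    with open(path, "wb") as f:
        pickle.dump(model.weights, f)

def load_model(path):
    # Rebuild a fresh model from the stored parameters.
    with open(path, "rb") as f:
        return TinyModel(pickle.load(f))

model = TinyModel({"w1": [0.1, 0.2], "b1": 0.5})
path = os.path.join(tempfile.gettempdir(), "tiny_model.pkl")
save_model(model, path)
restored = load_model(path)
print(restored.weights == model.weights)  # True
```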
The interpreter objects can be used to graph rewards, compare rewards against other models, replay episodes from different points in the agent's training, and more.
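As a rough illustration of what a reward-graphing interpreter does (the class below is a stand-in, not the library's actual interpreter), one common step before plotting is smoothing the per-episode reward series with a moving average:

```python
from collections import deque

class RewardInterpreter:
    """Illustrative stand-in for an interpreter object: records
    per-episode rewards and exposes a smoothed series for graphing."""

    def __init__(self, window=3):
        self.rewards = []
        self.window = window

    def record(self, episode_reward):
        self.rewards.append(episode_reward)

    def moving_average(self):
        # Average over a sliding window; early entries use a shorter window.
        out, buf = [], deque(maxlen=self.window)
        for r in self.rewards:
            buf.append(r)
            out.append(sum(buf) / len(buf))
        return out

interp = RewardInterpreter(window=3)
for r in [1.0, 2.0, 3.0, 4.0]:
    interp.record(r)
print(interp.moving_average())  # [1.0, 1.5, 2.0, 3.0]
```

The smoothed series is what would typically be handed to a plotting function, since raw episode rewards in RL are usually too noisy to compare directly.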
Notes:
Currently, the main obstacle is memory efficiency. More models will be added, and memory usage will also be addressed, possibly by offloading data to storage.
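One way such offloading could work (a sketch only, under the assumption that replay transitions dominate memory; this is not the library's planned implementation) is a disk-backed replay buffer, where transitions live in an on-disk store and only an index stays resident in RAM:

```python
import os
import random
import shelve
import tempfile

class DiskReplayBuffer:
    """Illustrative disk-backed replay memory: transitions are written
    to an on-disk shelf so RAM holds only a counter, not the data."""

    def __init__(self, path):
        self.db = shelve.open(path)
        self.size = 0

    def push(self, transition):
        # Keys are sequential indices; values are pickled transitions.
        self.db[str(self.size)] = transition
        self.size += 1

    def sample(self, batch_size):
        # Uniform sampling without replacement over stored indices.
        keys = random.sample(range(self.size), batch_size)
        return [self.db[str(k)] for k in keys]

    def close(self):
        self.db.close()

path = os.path.join(tempfile.mkdtemp(), "replay")
buf = DiskReplayBuffer(path)
for i in range(10):
    # (state, action, reward) placeholder transition.
    buf.push((i, "action", float(i)))
batch = buf.sample(4)
print(len(batch))  # 4
buf.close()
```

The trade-off is extra read latency per sampled batch, which matters less for off-policy methods like DQN and DDPG where sampling is already decoupled from environment stepping.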