rllab is no longer under active development, but an alliance of researchers from several universities has adopted it, and now maintains it under the name garage.

We recommend developing new projects, and rebasing old ones, onto the actively maintained garage codebase to promote reproducibility and code sharing in RL research. The new codebase shares almost all of its code with rllab, so most conversions only need to edit package import paths and perhaps update a few renamed functions.
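
For a sense of what such a conversion looks like, the sketch below shows the kind of import-path edit involved. The rllab imports are real; the garage-side paths are an assumption and should be checked against the current garage documentation, since its modules have been reorganized over time.

    # rllab imports in an existing project:
    from rllab.algos.trpo import TRPO
    from rllab.envs.normalized_env import normalize

    # Hypothetical garage equivalents after the package rename (verify the
    # exact module paths against the current garage docs before relying on them):
    # from garage.algos.trpo import TRPO
    # from garage.envs.normalized_env import normalize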

garage is always looking for new users and contributors, so please consider contributing your rllab-based projects and improvements to the new codebase! Recent improvements include first-class support for TensorFlow, TensorBoard integration, new algorithms including PPO and DDPG, updated Docker images, new environment wrappers, many updated dependencies, and stability improvements.


rllab

rllab is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks plus implementations of a variety of reinforcement learning algorithms, including TRPO.

rllab is fully compatible with OpenAI Gym; see the documentation for instructions and examples.
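
As an illustration, the sketch below follows the pattern of the bundled Gym examples and runs TRPO on a wrapped Gym environment. The module paths and constructor arguments mirror the examples/ scripts but may differ slightly between rllab versions, so treat it as a guide rather than a definitive recipe.

    from rllab.algos.trpo import TRPO
    from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
    from rllab.envs.gym_env import GymEnv
    from rllab.envs.normalized_env import normalize
    from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

    # Wrap a Gym environment so that rllab algorithms can consume it.
    env = normalize(GymEnv("Pendulum-v0"))

    # Gaussian MLP policy plus a simple linear baseline for advantage estimation.
    policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))
    baseline = LinearFeatureBaseline(env_spec=env.spec)

    algo = TRPO(
        env=env,
        policy=policy,
        baseline=baseline,
        batch_size=4000,
        max_path_length=100,
        n_itr=40,
        discount=0.99,
        step_size=0.01,
    )
    algo.train()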

rllab only officially supports Python 3.5+. For an older snapshot of rllab based on Python 2, please use the py2 branch.

rllab comes with support for running reinforcement learning experiments on an EC2 cluster, and tools for visualizing the results. See the documentation for details.
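
As a rough sketch of how this is typically driven (following the pattern used in the bundled examples; the argument names and mode values below are assumptions to verify against the documentation), an experiment defined as a function can be launched locally or on EC2 through run_experiment_lite, and the logged results inspected with the viskit frontend:

    from rllab.misc.instrument import run_experiment_lite

    def run_task(*_):
        # Build the environment, policy, and algorithm here, then call
        # algo.train(); see the examples/ directory for complete tasks.
        pass

    run_experiment_lite(
        run_task,
        exp_prefix="my_experiment",  # hypothetical experiment name
        n_parallel=4,
        seed=1,
        snapshot_mode="last",
        mode="ec2",  # use "local" to run on the current machine instead
    )

    # Logged results can then be browsed with the viskit frontend, e.g.:
    #   python rllab/viskit/frontend.py <path-to-experiment-logs>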

The main modules use Theano as the underlying framework, and we have support for TensorFlow under sandbox/rocky/tf.
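
The TensorFlow variants largely mirror the Theano APIs but are imported from the sandbox package; the paths below are indicative of that layout (note the TfEnv wrapper and the name argument on TF policies) and may vary by version.

    # TensorFlow counterparts of the Theano modules live under sandbox/rocky/tf;
    # these import paths are indicative and may differ between versions.
    from sandbox.rocky.tf.algos.trpo import TRPO
    from sandbox.rocky.tf.envs.base import TfEnv
    from sandbox.rocky.tf.policies.gaussian_mlp_policy import GaussianMLPPolicy
    from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
    from rllab.envs.gym_env import GymEnv

    # Gym environments are wrapped in TfEnv for the TensorFlow stack.
    env = TfEnv(GymEnv("Pendulum-v0"))

    # Unlike the Theano policy, the TF policy takes a variable-scope name.
    policy = GaussianMLPPolicy(name="policy", env_spec=env.spec, hidden_sizes=(32, 32))
    baseline = LinearFeatureBaseline(env_spec=env.spec)

    algo = TRPO(env=env, policy=policy, baseline=baseline,
                batch_size=4000, max_path_length=100, n_itr=40)
    algo.train()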

Documentation

Documentation is available online: https://rllab.readthedocs.org/en/latest/.

Citing rllab

If you use rllab for academic research, you are highly encouraged to cite the accompanying paper, "Benchmarking Deep Reinforcement Learning for Continuous Control" (Duan, Chen, Houthooft, Schulman, and Abbeel, ICML 2016).

Credits

rllab was originally developed by Rocky Duan (UC Berkeley / OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley / OpenAI), John Schulman (UC Berkeley / OpenAI), and Pieter Abbeel (UC Berkeley / OpenAI). The library continues to be jointly developed by people at OpenAI and UC Berkeley.

Slides

Slides presented at ICML 2016: https://www.dropbox.com/s/rqtpp1jv2jtzxeg/ICML2016_benchmarking_slides.pdf?dl=0
