
Sample Factory

High-throughput reinforcement learning codebase. Version 2 is out! 🤗

What is Sample Factory?

Sample Factory is one of the fastest RL libraries, focused on highly efficient synchronous and asynchronous implementations of policy gradients (PPO).

Sample Factory is thoroughly tested and used by many researchers and practitioners. Our implementation is known to reach state-of-the-art (SOTA) performance across a wide range of domains while minimizing training time and hardware requirements. The clips below demonstrate ViZDoom, IsaacGym, DMLab-30, Megaverse, Mujoco, and Atari agents trained with Sample Factory:

ViZDoom agents trained using Sample Factory 2.0 IsaacGym agents trained using Sample Factory 2.0
DMLab-30 agents trained using Sample Factory 2.0 Megaverse agents trained using Sample Factory 2.0
Mujoco agents trained using Sample Factory 2.0 Atari agents trained using Sample Factory 2.0

This README provides only a brief overview of the library. Visit the full documentation at https://samplefactory.dev for the complete list of features and details.

Installation

Just install from PyPI:

pip install sample-factory

Sample Factory is known to work on Linux and macOS; there is no Windows support at this time. Please refer to the documentation for additional environment-specific installation notes.

Quickstart

Train an agent from the command line using one of the existing integrations, e.g. Mujoco (you might need to run pip install sample-factory[mujoco] first):

python -m sf_examples.mujoco.train_mujoco --env=mujoco_ant --experiment=Ant --train_dir=./train_dir

Stop the experiment (Ctrl+C) when the desired performance is reached and then evaluate the agent:

python -m sf_examples.mujoco.enjoy_mujoco --env=mujoco_ant --experiment=Ant --train_dir=./train_dir

# Or use the alternative eval script: no rendering, but much faster! (set `sample_env_episodes` >= `num_workers` * `num_envs_per_worker`).
python -m sf_examples.mujoco.fast_eval_mujoco --env=mujoco_ant --experiment=Ant --train_dir=./train_dir --sample_env_episodes=128 --num_workers=16 --num_envs_per_worker=2
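In the command above, 16 workers × 2 envs per worker = 32 parallel environments, so --sample_env_episodes=128 comfortably satisfies this constraint.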

Do the same in a pixel-based VizDoom environment (you might need to run pip install sample-factory[vizdoom]; please also see the docs for VizDoom-specific instructions):

python -m sf_examples.vizdoom.train_vizdoom --env=doom_basic --experiment=DoomBasic --train_dir=./train_dir --num_workers=16 --num_envs_per_worker=10 --train_for_env_steps=1000000
python -m sf_examples.vizdoom.enjoy_vizdoom --env=doom_basic --experiment=DoomBasic --train_dir=./train_dir

Monitor any running or completed experiment with TensorBoard:

tensorboard --logdir=./train_dir

(or see the docs for WandB integration).
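As a rough sketch based on the documented WandB flags (names may differ between versions, so check the docs), the Mujoco run above could log to WandB like this, where my_project is a placeholder project name:

python -m sf_examples.mujoco.train_mujoco --env=mujoco_ant --experiment=Ant --train_dir=./train_dir --with_wandb=True --wandb_project=my_project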

To continue from here, copy and modify one of the existing env integrations to train agents in your own custom environment. We provide examples for all kinds of supported environments; please refer to the documentation for more details.
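For illustration, a minimal training script for a custom environment might look like the sketch below. It follows the pattern used in the sf_examples integrations; the environment name, the gymnasium CartPole placeholder, and the exact helper signatures are assumptions here, so consult the customization docs for the authoritative API.

import sys

import gymnasium as gym  # note: older Sample Factory versions use `gym` instead

from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.envs.env_utils import register_env
from sample_factory.train import run_rl


def make_custom_env(full_env_name, cfg=None, env_config=None, render_mode=None):
    # Placeholder: construct and return your own gym-style environment here.
    return gym.make("CartPole-v1", render_mode=render_mode)


def main():
    # Register the env factory under a name usable with --env=my_custom_env.
    register_env("my_custom_env", make_custom_env)

    # Parse the standard Sample Factory command line arguments.
    parser, _ = parse_sf_args()
    cfg = parse_full_cfg(parser)

    # Run the training loop; returns an exit status.
    return run_rl(cfg)


if __name__ == "__main__":
    sys.exit(main())

You could then train with something like python my_train_script.py --env=my_custom_env --experiment=CustomEnv --train_dir=./train_dir (script name hypothetical).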

Acknowledgements

This project would not be possible without amazing contributions from many people. I would like to thank:

  • Vladlen Koltun for amazing guidance and support, especially in the early stages of the project, for helping me solidify the ideas that eventually became this library.
  • My academic advisor Gaurav Sukhatme for supporting this project over the years of my PhD and for being overall an awesome mentor.
  • Zhehui Huang for his contributions to the original ICML submission, his diligent work on testing and evaluating the library and for adopting it in his own research.
  • Edward Beeching for his numerous awesome contributions to the codebase, including hybrid action distributions, new version of the custom model builder, multiple environment integrations, and also for promoting the library through the HuggingFace integration!
  • Andrew Zhang and Ming Wang for numerous contributions to the codebase and documentation during their HuggingFace internships!
  • Thomas Wolf and others at HuggingFace for the incredible (and unexpected) support and for the amazing work they are doing for the open-source community.
  • Erik Wijmans for feedback and insights and for his awesome implementation of RNN backprop using PyTorch's PackedSequence, multi-layer RNNs, and other features!
  • Tushar Kumar for contributing to the original paper and for his help with the fast queue implementation.
  • Costa Huang for developing CleanRL, for his work on benchmarking RL algorithms, and for awesome feedback and insights!
  • Denys Makoviichuk for developing rl_games, a very fast RL library, for inspiration and feedback on numerous features of this library (such as return normalizations, adaptive learning rate, and others).
  • Eugene Vinitsky for adopting this library in his own research and for his valuable feedback.
  • All my labmates at RESL who used Sample Factory in their projects and provided feedback and insights!

Huge thanks to all the people who are not mentioned here for your code contributions, PRs, issues, and questions! This project would not be possible without a community!

Citation

If you use this repository in your work or otherwise wish to cite it, please reference our ICML 2020 paper:

@inproceedings{petrenko2020sf,
  author    = {Aleksei Petrenko and
               Zhehui Huang and
               Tushar Kumar and
               Gaurav S. Sukhatme and
               Vladlen Koltun},
  title     = {Sample Factory: Egocentric 3D Control from Pixels at 100000 {FPS}
               with Asynchronous Reinforcement Learning},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning,
               {ICML} 2020, 13-18 July 2020, Virtual Event},
  series    = {Proceedings of Machine Learning Research},
  volume    = {119},
  pages     = {7652--7662},
  publisher = {{PMLR}},
  year      = {2020},
  url       = {http://proceedings.mlr.press/v119/petrenko20a.html},
  biburl    = {https://dblp.org/rec/conf/icml/PetrenkoHKSK20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

For questions, issues, and inquiries, please join our Discord. GitHub issues and pull requests are welcome! Check out the contribution guidelines.
