This repository contains the code and benchmarks for the paper "Efficient Parallel Reinforcement Learning Framework Using the Reactor Model". The paper proposes a framework based on the reactor model, which constrains a set of actors to a fixed communication pattern; this allows the runtime to efficiently orchestrate training, serving, and simulation workloads in reinforcement learning tasks.
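To give a feel for the fixed-topology idea, here is a minimal, framework-free Python sketch: components ("reactors") expose named ports, all connections between ports are declared when the graph is assembled, and a scheduler delivers events along those connections in a deterministic order. All class and function names below are hypothetical illustrations, not the paper's actual implementation.

```python
from collections import deque

class Reactor:
    """A component with named output ports; wiring is fixed before execution."""
    def __init__(self, name):
        self.name = name
        self.outputs = {}  # port name -> list of (target reactor, target port)

    def connect(self, port, target, target_port):
        # Connections are declared once, at graph-assembly time.
        self.outputs.setdefault(port, []).append((target, target_port))

    def emit(self, scheduler, port, value):
        # Values may only flow along the pre-declared connections.
        for target, target_port in self.outputs.get(port, []):
            scheduler.schedule(target, target_port, value)

    def react(self, scheduler, port, value):
        raise NotImplementedError

class Scheduler:
    """Delivers events in FIFO order; the fixed topology keeps runs deterministic."""
    def __init__(self):
        self.queue = deque()

    def schedule(self, reactor, port, value):
        self.queue.append((reactor, port, value))

    def run(self):
        while self.queue:
            reactor, port, value = self.queue.popleft()
            reactor.react(self, port, value)

class Simulator(Reactor):
    def react(self, scheduler, port, value):
        # Pretend to step an environment, then forward a transition.
        self.emit(scheduler, "transition", {"obs": value, "reward": 1.0})

class Learner(Reactor):
    def react(self, scheduler, port, value):
        print(f"{self.name} received {value}")

sim, learner = Simulator("sim"), Learner("learner")
sim.connect("transition", learner, "transition")  # topology fixed up front

sched = Scheduler()
sched.schedule(sim, "step", 0)  # inject one simulation step
sched.run()
```

Because the wiring never changes at run time, a scheduler like this can plan communication and parallel execution ahead of time, which is the property the paper exploits.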
The repository includes the following folders and files:
- `Number_of_Actors`: Benchmarks for evaluating performance with a varying number of actors.
- `Object_Size`: Benchmarks for evaluating performance with different object sizes.
- `Gym_Environments`: Benchmarks for OpenAI Gym environments.
- `Atari_Environments`: Benchmarks for Atari environments.
- `Parallel_Q_learning`: Benchmarks for synchronized parallel Q-learning (see the sketch after this list).
- `Multi_Agent_Inference`: Benchmarks for multi-agent RL inference.
- `Dataflow_Graph`: Template for generating the dataflow graph.
- `.gitignore`: Git ignore file.
- `Dockerfile`: Dockerfile for running the benchmarks.
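In synchronized parallel Q-learning, workers update private copies of a shared Q-table and merge them at a barrier after each round. The following is a hedged, self-contained Python sketch of that pattern on a toy chain MDP; the environment, hyperparameters, and the averaging merge are illustrative assumptions, not the benchmark's actual code, and the workers are shown sequentially for clarity while the benchmark runs them in parallel.

```python
import random
import numpy as np

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(state, action):
    # Toy chain MDP: action 1 moves right, action 0 moves left;
    # reward is given only for reaching the rightmost state.
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def worker_round(q_global, steps=100):
    # Each worker updates a private copy of the global Q-table.
    q = q_global.copy()
    s = 0
    for _ in range(steps):
        if random.random() < EPS:
            a = random.randrange(N_ACTIONS)      # explore
        else:
            a = int(np.argmax(q[s]))             # exploit
        s2, r = step(s, a)
        q[s, a] += ALPHA * (r + GAMMA * q[s2].max() - q[s, a])
        s = 0 if s2 == N_STATES - 1 else s2      # reset after terminal state
    return q

q_global = np.zeros((N_STATES, N_ACTIONS))
for _ in range(20):                              # synchronized rounds
    local_tables = [worker_round(q_global) for _ in range(4)]  # 4 "workers"
    q_global = np.mean(local_tables, axis=0)     # barrier: merge local tables

print(np.round(q_global, 2))
```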
The benchmarks can be reproduced in an OS-level virtualization environment using Docker:
```bash
docker build -t parallel_rl_benchmarks .
docker run parallel_rl_benchmarks
```
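To inspect the image or launch an individual benchmark by hand, an interactive session can be started instead (assuming the image includes a shell):

```bash
docker run -it parallel_rl_benchmarks bash
```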