The Parallel Evolutionary and Reinforcement Learning Library (Pearl) is a PyTorch-based package aimed at rapid prototyping of new adaptive decision-making algorithms at the intersection of reinforcement learning (RL) and evolutionary computation (EC). As such, it is not intended to provide pre-built algorithms as baselines, but rather flexible tools that let the user quickly build and test their own implementations and ideas. A technical report and a separate tutorial repo using Google Colab are also included to introduce users to the library.
Features | Pearl |
---|---|
Model Free RL algorithms (e.g. Actor Critic) | ✔️ |
Model Based RL algorithms (e.g. Dyna-Q) | ✔️ |
EC algorithms (e.g. Genetic Algorithm) | ✔️ |
Hybrid algorithms (e.g. CEM-DDPG) | ✔️ |
Multi-agent support | ✔️ |
Tensorboard integration | ✔️ |
Modular and extensible components | ✔️ |
Opinionated module settings | ✔️ |
Custom callbacks | ✔️ |
There are two options to install this package:
1. `pip install pearll`
2. `git clone git@github.com:LondonNode/Pearl.git`
- `agents`: implementations of RL and EC agents where the other modular components are put together
- `buffers`: handle storing and sampling of trajectories
- `callbacks`: inject logic for every step made in an environment (e.g. save model, early stopping)
- `common`: common methods applicable to all other modules (e.g. enumerations) and a main `utils.py` file with some useful general logic
- `explorers`: action explorers for enhanced exploration by adding noise to actions and random exploration for the first n steps
- `models`: neural network structures organized as `encoder` -> `torso` -> `head` (a minimal sketch follows this list)
- `signal_processing`: signal processing logic for extra modularity (e.g. TD returns, GAE); a generic GAE sketch also follows this list
- `updaters`: update neural networks and adaptive/iterative algorithms
- `settings.py`: settings objects for the above components, can be extended for custom components
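To make the `encoder` -> `torso` -> `head` structure concrete, here is a minimal PyTorch sketch of that composition. It is purely illustrative: the class name, layer sizes, and dimensions are assumptions and it does not use Pearl's actual model classes.

```python
import torch
import torch.nn as nn


class EncoderTorsoHeadModel(nn.Module):
    """Illustrative encoder -> torso -> head composition (not Pearl's API)."""

    def __init__(self, obs_dim: int, action_dim: int):
        super().__init__()
        # Encoder: maps raw observations to a feature vector
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        # Torso: shared hidden layers operating on the encoded features
        self.torso = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        # Head: task-specific output, e.g. action logits for a discrete policy
        self.head = nn.Linear(64, action_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.torso(self.encoder(obs)))


# Example usage: a batch of 8 observations with 4 features each
model = EncoderTorsoHeadModel(obs_dim=4, action_dim=2)
logits = model(torch.randn(8, 4))
```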
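Likewise, as an example of the kind of logic that lives in `signal_processing`, below is a short, self-contained sketch of Generalized Advantage Estimation (GAE) for a single trajectory. This is a standard textbook formulation, not a copy of Pearl's implementation; the function name and arguments are assumptions.

```python
import numpy as np


def generalized_advantage_estimate(
    rewards: np.ndarray,
    values: np.ndarray,
    last_value: float,
    dones: np.ndarray,
    gamma: float = 0.99,
    gae_lambda: float = 0.95,
) -> np.ndarray:
    """Compute GAE advantages for one trajectory (illustrative only)."""
    advantages = np.zeros_like(rewards, dtype=np.float64)
    next_value = last_value
    next_advantage = 0.0
    for t in reversed(range(len(rewards))):
        non_terminal = 1.0 - dones[t]
        # One-step TD error: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * next_value * non_terminal - values[t]
        # Recursion: A_t = delta_t + gamma * lambda * A_{t+1}
        next_advantage = delta + gamma * gae_lambda * non_terminal * next_advantage
        advantages[t] = next_advantage
        next_value = values[t]
    return advantages


# Example usage on a toy 3-step trajectory
rewards = np.array([1.0, 0.0, 1.0])
values = np.array([0.5, 0.4, 0.3])
dones = np.array([0.0, 0.0, 1.0])
print(generalized_advantage_estimate(rewards, values, last_value=0.0, dones=dones))
```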
See `pearll/agents/templates.py` for the templates to create your own agents! For more examples, see the specific agent implementations under `pearll/agents`.
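As a rough orientation before reading the real templates, the sketch below shows how the modular components described above typically interact inside an agent's training loop. It is a hypothetical outline only: the class, method, and attribute names are assumptions and do not reflect the actual interface in `pearll/agents/templates.py`.

```python
from typing import Any


class CustomAgentSketch:
    """Hypothetical agent outline wiring modular components together.

    NOT Pearl's template; see pearll/agents/templates.py for the real
    interface. The component objects here are generic stand-ins.
    """

    def __init__(self, env: Any, model: Any, buffer: Any, explorer: Any, updater: Any):
        self.env = env
        self.model = model        # e.g. an encoder -> torso -> head network
        self.buffer = buffer      # stores and samples trajectories
        self.explorer = explorer  # adds exploration noise to actions
        self.updater = updater    # performs the network/parameter updates

    def fit(self, num_steps: int) -> None:
        obs = self.env.reset()
        for step in range(num_steps):
            action = self.explorer(self.model, obs, step)
            next_obs, reward, done, info = self.env.step(action)
            self.buffer.add(obs, action, reward, next_obs, done)
            self.updater(self.model, self.buffer.sample())
            obs = self.env.reset() if done else next_obs
```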
To see training performance, run `tensorboard --logdir runs` or `tensorboard --logdir <tensorboard_log_path>`, where `<tensorboard_log_path>` is defined in your algorithm class initialization.
To run these scripts, you'll need to go to wherever the library is installed: `cd pearll`.
- `demo.py`: script to run very basic demos of agents with pre-defined hyperparameters; run `python3 -m pearll.demo -h` for more info.
- `plot.py`: script to generate more complex plots that can't be obtained via Tensorboard (e.g. multiple subplots); run `python3 -m pearll.plot -h` for more info.
Linux
- `scripts/setup_dev.sh`: setup your virtual environment
- `scripts/run_tests.sh`: run tests

Windows
- `scripts/windows_setup_dev.bat`: setup your virtual environment
- `scripts/windows_run_tests.bat`: run tests
Pearl uses poetry for dependency management and building releases instead of pip. As a quick guide:
- Run `poetry add [package]` to add more package dependencies.
- Poetry automatically handles the virtual environment used; check `pyproject.toml` for specifics on the virtual environment setup.
- If you want to run something in the poetry virtual environment, add `poetry run` as a prefix to the command you want to execute. For example, to run a python file: `poetry run python3 script.py`.
To cite Pearl, you can use:

    @misc{tangri2022pearl,
          title={Pearl: Parallel Evolutionary and Reinforcement Learning Library},
          author={Rohan Tangri and Danilo P. Mandic and Anthony G. Constantinides},
          year={2022},
          eprint={2201.09568},
          archivePrefix={arXiv},
          primaryClass={cs.LG}
    }
Pearl was inspired by Stable Baselines 3 and Tonic.