truman is a package that implements suites of environments (system simulations) exhibiting the behaviours of real-world, large-scale systems, e.g. changes in an online consumer cohort's conversion rate as a function of changes in product price.
truman is not a framework for training reinforcement learning agents. Rather, it aims to be an effective way to develop and validate one-shot optimal decision-making agents: agents that must perform well on unique systems that can't be reliably simulated and that carry a high cost of experimentation.
- Environments that are compatible with (built on) OpenAI's Gym interface (see the interaction-loop sketch below)
- Various suites of environments that exhibit common behaviours of real-world dynamic systems
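To illustrate, here is a minimal sketch of driving a truman environment through the standard Gym interaction loop. The environment ID below is hypothetical, and we assume that importing truman registers its environments with Gym, as is the common pattern for Gym extension packages:

```python
import gym

import truman  # noqa: F401  # assumed to register truman's environments with Gym

env = gym.make("HypotheticalPricingEnv-v0")  # hypothetical environment ID

observation = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # stand-in for your agent's decision
    observation, reward, done, info = env.step(action)
    total_reward += reward
```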
- `truman.agent_registration`: interface for managing agents and their hyperparameters
- `truman.run`: interface for running suites of agents on suites of environments and storing performance summaries and full histories
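As a rough sketch of how these two interfaces fit together (the function names and signatures below are illustrative assumptions, not truman's confirmed API; consult the module docstrings for the real interface):

```python
# Hypothetical usage of the two interfaces above; names and signatures
# are assumptions for illustration only.
from truman import agent_registration, run

# Register an agent class along with the hyperparameters to run it with
# (assumed interface, mirroring Gym's own registry pattern).
agent_registration.register(
    id="EpsilonGreedy-eps0.1",
    entry_point="my_agents:EpsilonGreedyAgent",  # hypothetical module:class
    kwargs={"epsilon": 0.1},
)

# Run the registered agents across a suite of environments, storing
# performance summaries and full histories (assumed interface).
run.run(agent_ids=["EpsilonGreedy-eps0.1"], output_dir="results/")
```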
To get started, you'll need to have Python 3.7+ installed. Then:
```
pip install truman
```
You can also clone the truman Git repository directly. This is useful when you're working on adding new environments or modifying truman itself. Clone and install in editable mode using:
```
git clone https://github.com/datavaluepeople/truman
cd truman
pip install -e .
```
The base framework that environments are built upon is OpenAI’s Gym. Gym is a powerful framework for building environments and developing reinforcement learning algorithms, but its environments are mostly directed towards training agents on problems that can be simulated exactly, e.g. playing an Atari game. Our work at datavaluepeople often involves developing reinforcement learning algorithms that make a massive number of optimal decisions simultaneously in high-noise, changing environments, e.g. pricing 100,000s of travel products daily, or making health intervention decisions for 1,000,000s of humans daily. In such environments, agents must learn quickly and adapt to novel behaviours, since the cost of testing algorithms live is very high.
Thus the suites of environments in truman are directed towards the goal of large-scale optimised decision making on complex systems, and allow agents only a single episode in which to learn and optimise simultaneously.
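To make the single-episode setting concrete, here is a minimal sketch of an agent that learns online within one episode, using a simple epsilon-greedy value estimate. The agent, the environment ID, and the assumption of a discrete action space are all illustrative, not part of truman's API:

```python
import random

import gym

import truman  # noqa: F401  # assumed to register truman's environments with Gym

env = gym.make("HypotheticalPricingEnv-v0")  # hypothetical environment ID

n_actions = env.action_space.n  # assumes a discrete action space
counts = [0] * n_actions
values = [0.0] * n_actions
epsilon = 0.1

observation = env.reset()
done = False
while not done:
    # Explore with probability epsilon, otherwise exploit the best estimate.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: values[a])
    observation, reward, done, info = env.step(action)
    # Incremental mean update: learning happens inside the single episode,
    # with no separate training phase.
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]
```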