Match PPG implementation #186

Status: Merged (32 commits, merged May 28, 2022). The diff below shows changes from 17 of the 32 commits.

Commits
- `419041d` added nit changes from ppg code (dipamc, May 14, 2022)
- `2e1190b` change observation buffer to uint8 (dipamc, May 14, 2022)
- `86f5be7` sample full rollouts (dipamc, May 15, 2022)
- `beff293` minor device fix (dipamc, May 15, 2022)
- `4cb85d5` update optimizer settings (dipamc, May 16, 2022)
- `d6ee26b` add ppg documentation (May 18, 2022)
- `fea4531` update mkdocs (dipamc, May 18, 2022)
- `20f15da` update images to png for codespell errors (dipamc, May 18, 2022)
- `6c3cb05` trigger CI (vwxyzjn, May 18, 2022)
- `631ab96` Minor format change (vwxyzjn, May 18, 2022)
- `d961d0f` format by running `pre-commit` (vwxyzjn, May 18, 2022)
- `4cff11d` removes trailing space (vwxyzjn, May 18, 2022)
- `fb9c832` Add an extra note (vwxyzjn, May 19, 2022)
- `31bb5c4` argument names and documentation changes (dipamc, May 23, 2022)
- `ed66604` add capture video (dipamc, May 23, 2022)
- `1610191` add experiment report (dipamc, May 25, 2022)
- `51c6aac` Merge branch 'master' into ppg-dev (vwxyzjn, May 27, 2022)
- `a4342f8` Update documentation (vwxyzjn, May 27, 2022)
- `3d4711c` Quick css fix (vwxyzjn, May 27, 2022)
- `b780521` Update documentation (vwxyzjn, May 27, 2022)
- `9c4edf8` Fix documentation for PPO (vwxyzjn, May 27, 2022)
- `23cd48e` Add benchmark commands (vwxyzjn, May 27, 2022)
- `8e4f977` Add benchmark commands (vwxyzjn, May 27, 2022)
- `72e8cce` add metrics section (dipamc, May 27, 2022)
- `aa695c1` Add more docs (vwxyzjn, May 27, 2022)
- `0564584` Quick fix on ddpg docs (vwxyzjn, May 27, 2022)
- `a08039e` Add procgen test cases (vwxyzjn, May 27, 2022)
- `31a175c` Update CI (vwxyzjn, May 27, 2022)
- `f063a7b` test CI (vwxyzjn, May 27, 2022)
- `60df2c8` test ci (vwxyzjn, May 27, 2022)
- `e70c71a` Update tests (vwxyzjn, May 27, 2022)
- `6ebaaae` normalization axis documentation (dipamc, May 28, 2022)
4 changes: 3 additions & 1 deletion .github/workflows/pre-commit.yml
@@ -1,8 +1,10 @@
 name: pre-commit
 
 on:
+  push:
+    branches: [ master ]
   pull_request:
-    branches: [ '*' ]
+    branches: [ master ]
 jobs:
   build:
     runs-on: ubuntu-latest
486 changes: 486 additions & 0 deletions cleanrl/ppg_procgen.py

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions docs/rl-algorithms.md
@@ -30,3 +30,6 @@ Below are the implemented algorithms and their brief descriptions.
- [x] Twin Delayed Deep Deterministic Policy Gradient (TD3)
    * [td3_continuous_action.py](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/td3_continuous_action.py)
    * For continuous action space.
- [x] Phasic Policy Gradient (PPG)
    * [ppg_procgen.py](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppg_procgen.py)
    * PPG implementation for Procgen
2 changes: 1 addition & 1 deletion docs/rl-algorithms/overview.md
@@ -15,4 +15,4 @@
| ✅ [Soft Actor-Critic (SAC)](https://arxiv.org/pdf/1812.05905.pdf) | :material-github: [`sac_continuous_action.py`](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py), :material-file-document: [docs](/rl-algorithms/sac/#sac_continuous_actionpy) |
| ✅ [Deep Deterministic Policy Gradient (DDPG)](https://arxiv.org/pdf/1509.02971.pdf) | :material-github: [`ddpg_continuous_action.py`](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py), :material-file-document: [docs](/rl-algorithms/ddpg/#ddpg_continuous_actionpy) |
| ✅ [Twin Delayed Deep Deterministic Policy Gradient (TD3)](https://arxiv.org/pdf/1802.09477.pdf) | :material-github: [`td3_continuous_action.py`](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/td3_continuous_action.py), :material-file-document: [docs](/rl-algorithms/td3/#td3_continuous_actionpy) |

| ✅ [Phasic Policy Gradient (PPG)](https://arxiv.org/abs/2009.04416) | :material-github: [`ppg_procgen.py`](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppg_procgen.py) |
96 changes: 96 additions & 0 deletions docs/rl-algorithms/ppg.md
@@ -0,0 +1,96 @@
# Phasic Policy Gradient (PPG)

## Overview

PPG is a DRL algorithm that separates policy and value function training by introducing an auxiliary phase. Training proceeds by running PPO during the policy phase while saving all the experience into a replay buffer. The replay buffer is then used to train the value function. This makes the algorithm considerably slower than PPO, but improves sample efficiency on the Procgen benchmark. A schematic sketch of this phase structure is shown below.
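
As a hedged illustration only (the toy `Agent`, the random "rollout" tensors, and the plain value loss below are placeholders, not the actual `ppg_procgen.py` code), the phase structure looks roughly like this:

```python
# Schematic sketch of PPG's alternating phases on dummy data. This is NOT
# ppg_procgen.py; the tiny Agent and the random tensors only show where the
# auxiliary phase fits relative to the PPO (policy) phase.
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, obs_dim=8, n_actions=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.policy = nn.Linear(64, n_actions)
        self.value = nn.Linear(64, 1)

    def forward(self, obs):
        hidden = self.encoder(obs)
        return self.policy(hidden), self.value(hidden)

agent = Agent()
optimizer = torch.optim.Adam(agent.parameters(), lr=5e-4, eps=1e-8)
n_pi = 32                          # policy-phase iterations between auxiliary phases
aux_obs, aux_returns = [], []      # the "replay buffer" filled during the policy phase

for phase in range(2):
    # Policy phase: run PPO on fresh rollouts and stash the experience.
    for _ in range(n_pi):
        obs = torch.randn(256, 8)      # stand-in for a rollout of observations
        returns = torch.randn(256, 1)  # stand-in for bootstrapped returns
        logits, value = agent(obs)
        loss = (value - returns).pow(2).mean()  # real code: clipped PPO loss + value loss
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        aux_obs.append(obs); aux_returns.append(returns)

    # Auxiliary phase: revisit everything stored above to train the value function
    # (the real code also distills into the policy network under a KL constraint).
    for _ in range(6):                 # auxiliary epochs
        _, value = agent(torch.cat(aux_obs))
        loss = (value - torch.cat(aux_returns)).pow(2).mean()
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    aux_obs.clear(); aux_returns.clear()
```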

Original paper:

* [Phasic Policy Gradient](https://arxiv.org/abs/2009.04416)

Reference resources:

* [Code for the paper "Phasic Policy Gradient"](https://github.com/openai/phasic-policy-gradient) - by the original authors at OpenAI

The original code has multiple code-level details that are not mentioned in the paper. We found these details to be important for reproducing the results claimed by the paper.

## Implemented Variants


| Variants Implemented | Description |
| ----------- | ----------- |
| :material-github: [`ppg_procgen.py`](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppg_procgen.py), :material-file-document: [docs](/rl-algorithms/ppg/#ppg_procgenpy) | For the Procgen benchmark, with 64x64 RGB image observations and discrete actions. |

Below are our single-file implementations of PPG:

## `ppg_procgen.py`

`ppg_procgen.py` works with the Procgen benchmark, which uses 64x64 RGB image observations and discrete actions.

### Usage

```bash
poetry install -E procgen
python cleanrl/ppg_procgen.py --help
python cleanrl/ppg_procgen.py --env-id "bigfish"
```

### Implementation details

`ppg_procgen.py` includes the following code-level implementation details that are different from PPO:

1. Full rollout sampling during auxiliary phase - (:material-github: [phasic_policy_gradient/ppg.py#L173](https://github.com/openai/phasic-policy-gradient/blob/c789b00be58aa704f7223b6fc8cd28a5aaa2e101/phasic_policy_gradient/ppg.py#L173)) - Instead of randomly sampling observations over the entire auxiliary buffer, PPG samples full rollouts from the buffer (sets of 256 steps). This full rollout sampling is only done during the auxiliary phase. Note that the rollouts will still start at random points, because PPO truncates the rollouts per env. This change gives a decent performance boost (see the sketch after this list).

1. Batch-level advantage normalization - PPG normalizes the full batch of advantage values before the PPO updates, instead of normalizing advantages on each minibatch (:material-github: [phasic_policy_gradient/ppo.py#L70](https://github.com/openai/phasic-policy-gradient/blob/c789b00be58aa704f7223b6fc8cd28a5aaa2e101/phasic_policy_gradient/ppo.py#L70)) (see the sketch after this list).

1. Normalized network initialization - (:material-github: [phasic_policy_gradient/impala_cnn.py#L64](https://github.com/openai/phasic-policy-gradient/blob/c789b00be58aa704f7223b6fc8cd28a5aaa2e101/phasic_policy_gradient/impala_cnn.py#L64)) - PPG uses normalized initialization for all layers, with different scales (a hedged sketch is shown after this list).
    * The original PPO used orthogonal initialization only for the policy and value heads, with scales of 0.01 and 1.0, respectively.
    * For PPG:
        * All weights are initialized with the default torch initialization (Kaiming Uniform).
        * Each layer's weights are divided by the L2 norm of the weights along the (which axis?), and multiplied by a scale factor.
> Review comment from the repository owner: Please clarify "which axis" here.

        * Scale factors for the different layers:
            * Value head, policy head, auxiliary value head: 0.1
            * Fully connected layer after the last conv layer: 1.4
            * Convolutional layers: approximately 0.638
1. The Adam optimizer's epsilon parameter - (:material-github: [phasic_policy_gradient/ppg.py#L239](https://github.com/openai/phasic-policy-gradient/blob/c789b00be58aa704f7223b6fc8cd28a5aaa2e101/phasic_policy_gradient/ppg.py#L239)) - Set to the torch default of 1e-8, instead of the 1e-5 used in PPO.
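
As referenced from details 1 and 2 above, here is a small hedged sketch of full-rollout sampling and batch-level advantage normalization on dummy tensors (shapes and variable names are illustrative, not the ones used in `ppg_procgen.py`):

```python
# Hedged sketch of details 1 and 2 above; shapes and names are illustrative only.
import torch

num_envs, num_steps, minibatch_size = 64, 256, 2048

# (2) Batch-level advantage normalization: normalize once over the WHOLE batch
# before the PPO epochs, instead of re-normalizing inside each minibatch.
advantages = torch.randn(num_envs * num_steps)
advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
for start in range(0, advantages.numel(), minibatch_size):
    mb_advantages = advantages[start:start + minibatch_size]  # used as-is, no per-minibatch normalization

# (1) Full-rollout sampling in the auxiliary phase: keep the auxiliary buffer
# shaped (num_rollouts, num_steps, ...) and draw whole 256-step rollouts rather
# than individual random observations. (Real observations are 64x64x3 images;
# a small feature dimension is used here to keep the example lightweight.)
aux_buffer = torch.zeros(32, num_steps, 4)
rollouts_per_batch = 8
rollout_indices = torch.randperm(aux_buffer.shape[0])[:rollouts_per_batch]
aux_batch = aux_buffer[rollout_indices]   # shape: (8, 256, 4), i.e. full rollouts
```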

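As referenced from detail 3 above, here is a hedged sketch of the normalized initialization. Taking the L2 norm over all dimensions except the output axis, and zeroing the bias, are our assumptions about the reference code (the exact axis is what the review comment above asks to clarify); the scale factors are the ones listed above:

```python
# Sketch of the "normed" initialization in detail 3. Normalizing over all
# dimensions except the output axis, and zeroing the bias, are assumptions here;
# the scale factors match the values listed in the bullet points above.
import torch
import torch.nn as nn

def normed_init_(layer: nn.Module, scale: float = 1.0) -> nn.Module:
    with torch.no_grad():
        weight = layer.weight  # already Kaiming-uniform from the default torch init
        norm = weight.norm(dim=tuple(range(1, weight.dim())), keepdim=True)
        weight.mul_(scale / (norm + 1e-8))  # each output unit's weight vector gets L2 norm == scale
        if layer.bias is not None:
            layer.bias.zero_()
    return layer

conv = normed_init_(nn.Conv2d(3, 16, kernel_size=3), scale=0.638)  # convolutional layers
fc = normed_init_(nn.Linear(256, 256), scale=1.4)                  # FC layer after the last conv layer
policy_head = normed_init_(nn.Linear(256, 15), scale=0.1)          # policy head (value and aux heads use the same 0.1 scale)
```
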
### Extra notes

- All the default hyperparameters from the original PPG implementation are used, except that the number of environments is set to 64.
- The original PPG paper does not report results on the easy environments, so further hyperparameter tuning may yield better results.
- Skipping every alternate auxiliary phase gives similar performance on the easy environments while saving compute.
- The normalized network initialization scheme seems to matter a lot, but using layer norm with orthogonal initialization also works.
- Using mixed precision for the auxiliary phase also works well to save compute, but using it in the policy phase makes training unstable.


### Differences from the original PPG code

- The original PPG code supports LSTM whereas the CleanRL code does not.
- The original PPG code uses separate optimizers for the policy and auxiliary phases, but we do not implement this, as we found it does not make much difference.
- The original PPG code utilizes multiple GPUs, but our implementation does not.


### Experiment results

Below are the average episodic returns for `ppg_procgen.py`, and a comparison with `ppo_procgen.py` at 25M timesteps.

| Environment | `ppg_procgen.py` | `ppo_procgen.py` |
| ----------- | ----------- | ----------- |
| Bigfish (easy) | 27.670 ± 9.523 | 21.605 ± 7.996 |
| Starpilot (easy) | 39.086 ± 11.042 | 34.025 ± 12.535 |

Learning curves:

<div class="grid-container">

<img src="../ppg/bigfish-easy-ppg-ppo.png">

<img src="../ppg/starpilot-easy-ppg-ppo.png">

<img src="../ppg/bossfight-easy-ppg-ppo.png">

</div>

Tracked experiments and game play videos:

Please check this [wandb report](https://wandb.ai/openrlbenchmark/cleanrl/reports/CleanRL-PPG-vs-PPO-results--VmlldzoyMDY2NzQ5) for tracked results.
Binary file added docs/rl-algorithms/ppg/bigfish-easy-ppg-ppo.png
Binary file added docs/rl-algorithms/ppg/bossfight-easy-ppg-ppo.png
Binary file added docs/rl-algorithms/ppg/starpilot-easy-ppg-ppo.png
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -87,6 +87,7 @@ nav:
- rl-algorithms/ddpg.md
- rl-algorithms/sac.md
- rl-algorithms/td3.md
- rl-algorithms/ppg.md
- Open RL Benchmark: open-rl-benchmark.md
- Advanced:
- advanced/resume-training.md