# Reproduce MADDPG with PARL

Based on PARL, this example reproduces the MADDPG deep reinforcement learning algorithm.

Paper: MADDPG in Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments
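
The core idea of MADDPG is a centralized critic with decentralized actors: each agent's critic is trained on every agent's observations and actions, while its policy acts only on its own observation. The sketch below illustrates the critic's target computation from the paper; it is a minimal, framework-free illustration, and the names (`maddpg_critic_target`, `target_actors`, `target_critics`) are illustrative, not PARL's API.

```python
# Minimal sketch (not PARL code) of the MADDPG critic target for agent i:
#   y_i = r_i + gamma * Q_i(o'_1..o'_N, mu'_1(o'_1)..mu'_N(o'_N))
def maddpg_critic_target(agent_i, rewards, next_obs, dones,
                         target_actors, target_critics, gamma=0.95):
    """rewards/dones: per-agent lists for one transition.
    next_obs: list of per-agent next observations.
    target_actors[j]: callable mapping o'_j -> a'_j (decentralized target policy).
    target_critics[i]: callable mapping (all next obs, all next actions) -> Q value;
    the centralized critic sees every agent's observation and action."""
    # Each target policy acts only on its own agent's observation.
    next_actions = [pi(o) for pi, o in zip(target_actors, next_obs)]
    # The centralized target critic of agent i conditions on all agents.
    q_next = target_critics[agent_i](next_obs, next_actions)
    return rewards[agent_i] + gamma * (1.0 - float(dones[agent_i])) * q_next
```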

## Multi-agent particle environment introduction

A simple multi-agent particle world based on gym. Please see here to install the environment and learn more about it.
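
For orientation, a scenario is typically constructed as in the sketch below, following the usage documented in the multiagent-particle-envs repository; the scenario name `simple_spread.py` is just an example.

```python
# Sketch of building a particle-world scenario (per the multiagent-particle-envs docs).
import multiagent.scenarios as scenarios
from multiagent.environment import MultiAgentEnv

scenario = scenarios.load("simple_spread.py").Scenario()
world = scenario.make_world()
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward,
                    scenario.observation)

obs_n = env.reset()  # list with one observation per agent
print("number of agents:", len(obs_n))
```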

## Benchmark result

Mean episode reward (averaged over every 1000 episodes) during training (25000 episodes in total).

Training-curve figures are shown for the following scenarios: simple, simple_adversary, simple_push, simple_reference, simple_speaker_listener, simple_spread, simple_tag, simple_world_comm.

## Experiment results

Demonstrations of the trained agents after 25000 episodes.

Demonstration figures are shown for the same scenarios: simple, simple_adversary, simple_push, simple_reference, simple_speaker_listener, simple_spread, simple_tag, simple_world_comm.

## How to use

### Dependencies:

- parl
- gym
- multiagent-particle-envs

### Start Training:

```bash
# To train an agent for the simple_speaker_listener scenario
python train.py

# To train on another scenario; the model is saved automatically every 1000 episodes
# python train.py --env [ENV_NAME]

# To show animation effects after training
# python train.py --env [ENV_NAME] --show --restore
```