Train an agent to control a robotic arm to continuously track a moving target in the Reacher environment from Unity ML-Agents, using a multi-step variant of Twin Delayed Deep Deterministic Policy Gradient (TD3), an extension of DDPG.
Watch a full video of the agent here: https://youtu.be/JC9iwMmjpzo
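The core idea of the multi-step variant is to replace TD3's one-step bootstrapped target with an n-step discounted return. Below is a minimal PyTorch sketch of such a target computation; the function names, tensor layout, and hyperparameter values are illustrative assumptions, not the repository's actual code.

```python
import torch

def multistep_td3_target(rewards, next_states, dones, actor_target,
                         critic1_target, critic2_target,
                         gamma=0.99, n_steps=5,
                         noise_std=0.2, noise_clip=0.5):
    """Sketch of an n-step TD3 target (illustrative, not this repo's code).

    rewards:     (batch, n_steps) rewards r_t ... r_{t+n-1}, assumed
                 zero-padded past any terminal step
    next_states: (batch, state_dim) states s_{t+n}
    dones:       (batch, 1), 1.0 if the episode ended within the n steps
    """
    with torch.no_grad():
        # Target policy smoothing: perturb the target action with clipped noise.
        next_actions = actor_target(next_states)
        noise = (torch.randn_like(next_actions) * noise_std).clamp(-noise_clip, noise_clip)
        next_actions = (next_actions + noise).clamp(-1.0, 1.0)  # torques lie in [-1, 1]

        # Clipped double-Q: bootstrap from the smaller of the two target critics.
        q_next = torch.min(critic1_target(next_states, next_actions),
                           critic2_target(next_states, next_actions))

        # n-step discounted return: sum_{k=0}^{n-1} gamma^k * r_{t+k}
        discounts = gamma ** torch.arange(n_steps, dtype=rewards.dtype)
        n_step_return = (rewards * discounts).sum(dim=1, keepdim=True)

        # Bootstrap with gamma^n, cut off if the episode terminated in the window.
        return n_step_return + (gamma ** n_steps) * (1.0 - dones) * q_next
```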
The Reacher environment in Unity ML-Agents features a double-jointed rigid-body arm that tracks a target moving continuously at fast or slow speeds. The agent receives a reward at every time step the arm is within the target region. This particular version of the Reacher environment runs 20 robotic arm agents at once; multiple parallel agents can speed up learning by gathering experience simultaneously. The criterion for solving the task is averaging a score of 30 points across 100 episodes.
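As a concrete illustration of that criterion (a hypothetical sketch, not code from this repo): each episode's score is the undiscounted return averaged over the 20 arms, and the task counts as solved once the mean over a 100-episode window reaches 30.

```python
from collections import deque
import numpy as np

scores_window = deque(maxlen=100)  # scores of the 100 most recent episodes

def record_episode(per_agent_returns):
    """per_agent_returns: length-20 array of undiscounted returns, one per arm."""
    scores_window.append(np.mean(per_agent_returns))  # average across the 20 agents
    return len(scores_window) == 100 and np.mean(scores_window) >= 30.0

# Example: log one episode where the agents scored around 30 points each.
solved = record_episode(np.random.uniform(25.0, 35.0, size=20))
```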
The agent observes a 33-dimensional state from the environment, including position, rotation, velocity, and angular velocity. The action space has four dimensions, corresponding to the torques applied to the two joints.
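For a sense of scale, the snippet below steps the 20-agent environment once with random torques, using the `unityagents` package installed in the steps below. The build file name is a placeholder; point it at wherever you unzip the Unity environment for your OS.

```python
from unityagents import UnityEnvironment
import numpy as np

# Placeholder path: replace with your unzipped Reacher build.
env = UnityEnvironment(file_name="Reacher.app")
brain_name = env.brain_names[0]

env_info = env.reset(train_mode=True)[brain_name]
states = env_info.vector_observations             # shape (20, 33): one row per arm

actions = np.clip(np.random.randn(20, 4), -1, 1)  # 4 joint torques per arm, in [-1, 1]
env_info = env.step(actions)[brain_name]

rewards = env_info.rewards                        # 20 per-agent rewards for this step
dones = env_info.local_done                       # 20 episode-termination flags
env.close()
```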
- Create and activate a Python 3.6 environment. Choose an environment name in place of `my_env`.

  ```bash
  conda create -n my_env python=3.6
  source activate my_env
  ```
- Create an IPython kernel for your new environment.

  ```bash
  python -m ipykernel install --user --name my_env --display-name "my_env"
  ```
- Clone this repository and install the dependencies in the `python/` folder, which comes from the Udacity Deep Reinforcement Learning repository. These dependencies include PyTorch and the Unity ML-Agents Toolkit.

  ```bash
  git clone https://github.com/supercurious/deep-rl-continuous-control.git
  cd deep-rl-continuous-control/python
  pip install .
  ```
- Download the Unity environment and unzip the file.
- Open the Jupyter notebook `REPORT.ipynb` for the implementation and results.

  ```bash
  jupyter notebook REPORT.ipynb
  ```
- From the top menu bar, click on "Kernel", navigate to "Change kernel" and select the new environment you created during installation.