In this project, an agent learns to control a double-jointed arm so that its hand tracks target locations (the Reacher environment). A reward of +0.1 is provided for each step that the agent's hand is in the target location, so the goal of the agent is to maintain its position at the target location for as many time steps as possible.

The observation space consists of 33 variables corresponding to the position, rotation, velocity, and angular velocities of the arm. Each action is a vector of four numbers, corresponding to the torques applied to the two joints; every entry in the action vector must be a number between -1 and 1.

The task is episodic, and in order to solve the environment, the agent must achieve an average score of +30 over 100 consecutive episodes.
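As a quick sanity check after downloading, the environment can be loaded and its spaces inspected with the `unityagents` package. The following is a minimal sketch, assuming the single-agent version of the environment and a macOS binary named `Reacher.app` (adjust the `file_name` path to match your OS and download location):

```python
import numpy as np
from unityagents import UnityEnvironment

# Path to the downloaded environment binary (assumption: macOS build).
env = UnityEnvironment(file_name="Reacher.app")

# The environment exposes a single "brain" that the agent controls.
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

# Reset in training mode and inspect the observation and action spaces.
env_info = env.reset(train_mode=True)[brain_name]
state = env_info.vector_observations[0]
action_size = brain.vector_action_space_size
print("Observation size:", len(state))   # expected: 33
print("Action size:", action_size)       # expected: 4

# Take one random step; actions must lie in [-1, 1].
action = np.clip(np.random.randn(1, action_size), -1, 1)
env_info = env.step(action)[brain_name]
print("Reward:", env_info.rewards[0], "Done:", env_info.local_done[0])
env.close()
```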
Set up the dependencies as described here.
Download the environment from one of the links below. You need only select the environment that matches your operating system:
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
For Windows users: check out this link if you need help determining whether your computer is running a 32-bit or a 64-bit version of the Windows operating system.

If you'd like to train the agent on AWS (and have not enabled a virtual screen), please use this link (version 1) or this link (version 2) to obtain the "headless" version of the environment. Without a virtual screen you will not be able to watch the agent, but you will still be able to train it. (To watch the agent, follow the instructions to enable a virtual screen, and then download the environment for the Linux operating system above.)
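On a headless server, the environment can be loaded without rendering. A minimal sketch, assuming the headless download unpacks to `Reacher_Linux_NoVis/Reacher.x86_64` and that your installed `unityagents` version supports the `no_graphics` flag:

```python
from unityagents import UnityEnvironment

# Point file_name at the headless binary; no_graphics skips rendering entirely.
env = UnityEnvironment(file_name="Reacher_Linux_NoVis/Reacher.x86_64",
                       no_graphics=True)
```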
Clone the repository and unpack the environment file into the project folder.
To train the agent, run `ddpg.py`.
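For orientation before reading the code, the core DDPG learning step is sketched below. This is illustrative, not the exact contents of `ddpg.py`; the `mlp` helper, hidden sizes, and hyperparameters (`GAMMA`, `TAU`, learning rates) are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

STATE_SIZE, ACTION_SIZE = 33, 4   # Reacher observation/action sizes
GAMMA, TAU = 0.99, 1e-3           # discount and soft-update rate (assumed)

def mlp(in_dim, out_dim, out_act=None):
    """Small two-layer network; the real architecture is in Report.md."""
    layers = [nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

# Actor maps states to actions in [-1, 1]; critic scores (state, action) pairs.
actor = mlp(STATE_SIZE, ACTION_SIZE, nn.Tanh())
actor_target = mlp(STATE_SIZE, ACTION_SIZE, nn.Tanh())
critic = mlp(STATE_SIZE + ACTION_SIZE, 1)
critic_target = mlp(STATE_SIZE + ACTION_SIZE, 1)
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())
actor_opt = optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(states, actions, rewards, next_states, dones):
    """One learning step on a replay-buffer minibatch.

    Shapes: states (B, 33), actions (B, 4), rewards/dones (B, 1).
    """
    # Critic: regress Q(s, a) toward the bootstrapped TD target.
    with torch.no_grad():
        next_actions = actor_target(next_states)
        next_q = critic_target(torch.cat([next_states, next_actions], dim=1))
        target_q = rewards + GAMMA * (1.0 - dones) * next_q
    q = critic(torch.cat([states, actions], dim=1))
    critic_loss = F.mse_loss(q, target_q)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: ascend the critic's value of the actor's own actions.
    actor_loss = -critic(torch.cat([states, actor(states)], dim=1)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft-update target networks toward the online networks.
    for net, target in ((actor, actor_target), (critic, critic_target)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1.0 - TAU).add_(TAU * p.data)
```

In training, `ddpg_update` would be called on minibatches sampled from a replay buffer after each environment step, with exploration noise (e.g., Ornstein-Uhlenbeck) added to the actor's actions.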
A description of the implementation is provided in `Report.md`; for technical details, see the code.
Actor and critic model weights are stored in `actor.pth` and `critic.pth`, respectively.
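To evaluate a trained agent, these checkpoints can be restored with PyTorch. A minimal sketch, assuming the files are plain `state_dict` checkpoints and that `actor` is a network with the same architecture used in training (for example, the one from the sketch above):

```python
import torch

# Load the saved parameters into a network built with the training architecture.
actor.load_state_dict(torch.load("actor.pth", map_location="cpu"))
actor.eval()

# Greedy (noise-free) action for a single placeholder state, clipped to the
# valid action range; a real evaluation loop would feed environment states.
with torch.no_grad():
    action = actor(torch.randn(1, 33)).clamp(-1.0, 1.0)
```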