This repository contains my implementation of the first project from Udacity's Deep Reinforcement Learning Nanodegree. The project is a reinforcement learning agent that solves the Banana navigation environment, which is built on the Unity Machine Learning Agents toolkit. See Unity ML-Agents for more details.
The agent receives a reward of +1 for collecting a yellow banana and -1 for collecting a blue banana. The environment is considered solved when the agent achieves an average score of +13 over 100 consecutive episodes.
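The "solved" criterion above can be checked with a simple rolling average over the most recent 100 episode scores. A minimal sketch (the function name and signature are illustrative, not taken from the notebook):

```python
from collections import deque

def is_solved(scores, window=100, target=13.0):
    """Return True once the average score over the last `window`
    episodes reaches `target` (requires at least `window` episodes)."""
    recent = deque(scores[-window:], maxlen=window)
    return len(recent) == window and sum(recent) / window >= target
```

Note that the criterion needs a full window of 100 episodes before it can trigger, so early high scores alone do not count as solving the environment.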
There are four discrete actions:
- move forward
- move backward
- turn left
- turn right
The state space has 37 dimensions, containing the agent's velocity along with ray-based perception of objects in the agent's forward direction. The agent uses this information to select the best action at each step.
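To make the state/action dimensions concrete, here is a minimal epsilon-greedy action-selection step over the 37-dimensional state and 4 actions. The linear Q-function (`weights`) is a stand-in for the trained Q-network, not the project's actual model:

```python
import numpy as np

STATE_SIZE = 37   # dimensions in the Banana environment's state vector
ACTION_SIZE = 4   # forward, backward, turn left, turn right

def act(state, weights, eps=0.1, rng=None):
    """Epsilon-greedy action selection over a linear Q-function.
    `weights` is a (37, 4) array standing in for a trained Q-network."""
    rng = rng or np.random.default_rng()
    if rng.random() < eps:
        return int(rng.integers(ACTION_SIZE))   # explore: random action
    q_values = state @ weights                  # shape (4,): one value per action
    return int(np.argmax(q_values))             # exploit: greedy action
```

With `eps=0` this always picks the action with the highest estimated value; with `eps=1` it explores uniformly at random.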
The following setup instructions are adapted from Udacity's GitHub repository, which provides a simple guide on preparing your local machine for reinforcement learning with Unity. To set up your Python environment to run the code in this repository, follow the steps below.
1. Create (and activate) a new environment with Python 3.6.

   - Linux or Mac:
   ```bash
   conda create --name drlnd python=3.6
   source activate drlnd
   ```
   - Windows:
   ```bash
   conda create --name drlnd python=3.6
   activate drlnd
   ```
2. Follow the instructions in the OpenAI Gym repository to perform a minimal install of OpenAI Gym.
3. Clone the Udacity repository (if you haven't already!), and navigate to the `python/` folder. Then, install several dependencies.
   ```bash
   git clone https://github.com/udacity/deep-reinforcement-learning.git
   cd deep-reinforcement-learning/python
   pip install .
   ```
4. Create an IPython kernel for the `drlnd` environment.
   ```bash
   python -m ipykernel install --user --name drlnd --display-name "drlnd"
   ```
5. Before running code in a notebook, change the kernel to match the `drlnd` environment by using the drop-down `Kernel` menu.
To run this project on your local machine, clone this repository.
```bash
git clone https://github.com/smejak/Udacity-DeepRL-Nanodegree-P1-Navigation.git
```
Then, open `Navigation.ipynb` and follow the instructions to train your own agent.
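Training typically follows a standard DQN loop in which the epsilon-greedy exploration rate decays over episodes. As a sketch of such a schedule (the parameter names `eps_start`, `eps_end`, and `eps_decay` are common defaults, assumed here rather than taken from the notebook):

```python
def epsilon_schedule(n_episodes, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Yield the exploration rate for each episode:
    multiplicative decay from `eps_start`, floored at `eps_end`."""
    eps = eps_start
    for _ in range(n_episodes):
        yield eps
        eps = max(eps_end, eps * eps_decay)

# Example: over 2000 episodes, epsilon shrinks from 1.0 toward the 0.01 floor
schedule = list(epsilon_schedule(2000))
```

Decaying exploration this way lets the agent act mostly at random early on (gathering diverse experience) and rely increasingly on its learned Q-values as training progresses.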