In this project the task is to train a deep learning algorithm to autonomously navigate a real car around a realistic test circuit, and make the appropriate manoeuvres where necessary. At the end of the project, you are expected to give a presentation and write a report about what you have done. Your model will be tested on the track and will compete against the models of your peers.
- Work in pairs
- Develop a deep learning model
- Input = Image from a camera on the car
- Predictions = Appropriate speed and steering angle
- Dataset is hosted on Kaggle
- 13.8k images
- Speed & steering angle labels are also included in the dataset
- We are free to generate our own dataset
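Since the task is a single image in, two values out, one natural design is a shared convolutional backbone with two output heads. The architecture below is only a hypothetical sketch (the filter counts, input size, and losses are assumptions, not part of the brief):

```python
import tensorflow as tf

def build_model(input_shape=(240, 320, 3)):
    """Sketch of a two-headed CNN: one head for speed, one for steering angle.

    The input shape and layer sizes are illustrative assumptions --
    choose them to match your camera frames and compute budget.
    """
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    # Small convolutional backbone shared by both heads.
    for filters in (16, 32, 64):
        x = tf.keras.layers.Conv2D(filters, 3, strides=2, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    # Two regression heads, both squashed to [0, 1] (assumes labels
    # are normalised to that range during preprocessing).
    speed = tf.keras.layers.Dense(1, activation="sigmoid", name="speed")(x)
    angle = tf.keras.layers.Dense(1, activation="sigmoid", name="angle")(x)
    model = tf.keras.Model(inputs, [speed, angle])
    model.compile(optimizer="adam", loss="mse")
    return model
```

Keeping the two predictions as separate named heads makes it easy to weight their losses differently later if one quantity proves harder to learn.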
- The Kaggle competition is hosted here.
- This will allow you to automate the process of model submission and obtain an indication of performance (using a small set of test data) before we evaluate the models on the final, unseen data.
- Create a Kaggle account (if you do not have one) and form a team with your project partner.
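Kaggle submissions are typically a CSV of per-image predictions. The exact column names for this competition are not stated here, so the `image_id`, `angle`, `speed` headers below are assumptions; check the competition's sample submission for the real format:

```python
import csv

def write_submission(predictions, path="submission.csv"):
    """Write a Kaggle-style submission CSV.

    `predictions` is an iterable of (image_id, angle, speed) tuples.
    The header names are assumptions -- verify them against the
    competition's sample submission file.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_id", "angle", "speed"])
        for image_id, angle, speed in predictions:
            writer.writerow([image_id, angle, speed])
```

Generating this file from your model's test-set predictions lets you script the submit-and-score loop instead of uploading by hand.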
- A live challenge, where your pre-trained model will be deployed to the car and tested on real circuits. This will be performed in person.
- The main body of the car is the SunFounder PiCar-V kit V2 and is equipped with a Raspberry Pi (RPi)
- TensorFlow v2.4 is installed on the car.
- The car has an optional Coral Edge TPU, which is a custom device to run forward-pass operations for edge computing.
- Note that it isn’t necessary to convert your model to TensorFlow Lite.
A standardised skeleton code will be provided to you that you should integrate your pre-trained model with, which we will then install on the car prior to the live testing.
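The skeleton code will presumably hand your model one camera frame at a time, so your preprocessing on the car must match what the network saw during training. The sketch below shows only the dtype/scaling/batch-dimension handling; the frame size and [0, 1] scaling are assumptions about your own training pipeline, and resizing (e.g. with `cv2.resize`) is omitted:

```python
import numpy as np

def preprocess(frame):
    """Prepare a raw camera frame for the network.

    Assumes `frame` is an HxWx3 uint8 array and that the model was
    trained on pixels scaled to [0, 1] -- adjust to match whatever
    preprocessing you used during training.
    """
    x = frame.astype(np.float32) / 255.0   # scale pixels to [0, 1]
    return x[np.newaxis, ...]              # add batch dimension
```

A mismatch here (e.g. training on [0, 1] pixels but deploying on raw uint8) is a common cause of models that score well offline but fail on the car.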
- We can use Google Colab or our own local machine to train the model
- We will also have access to the MLiS1 or MLiS2 machines (each with w) to perform training. These are accessible by ssh'ing into the machine, i.e. by typing
- ssh username@mlis1.nottingham.ac.uk or
- ssh username@mlis2.nottingham.ac.uk, where username is your University username.
- In order to install custom packages on your machine, you will need to set up a conda environment. To install conda, type the following command
- bash /shared/Anaconda3-2019.10-Linux-x86_64.sh
- Once installed, you will need to add a start-up script
- echo ". ~/.bashrc" >> .profile
- Lastly to create your conda environment use
- conda create --name my_env python=3.6
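After creating the environment, you would typically activate it and install your training packages. The package list and versions below are suggestions, not requirements; pinning TensorFlow to 2.4 simply matches the version installed on the car:

```shell
# Activate the environment created above
conda activate my_env

# Install training dependencies (versions are illustrative;
# tensorflow==2.4 matches the version on the car)
pip install tensorflow==2.4 numpy pandas matplotlib
```

Installing inside the conda environment keeps your packages isolated from the shared system Python on MLiS1/MLiS2.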
- T-junction track
- Oval track
- Figure-of-eight track
Important: we only use UK driving rules, i.e. driving on the left-hand side. The training data was based on the following driving scenarios:
1. Keeping in lane while driving along the straight section of the T-junction track.
2. As (1), but stopping if a pedestrian is in the road.
3. As (1), but driving as normal if pedestrians or other objects are on the side of (but not in) the road.
4. Driving around the oval track in both directions.
5. As (4), but stopping if a pedestrian is in the road.
6. As (4), but driving as normal if pedestrians or other objects are on the side of (but not in) the road.
7. Performing a turn at the T-junction in response to a traffic sign (either left or right).
8. Driving around the figure-of-eight track in both directions, continuing straight at the intersection. We will not consider objects in or at the side of the road for this scenario.
9. Stopping at a red traffic light and continuing at a green traffic light. We will only consider these scenarios in the live testing.
- angle.ipynb predicts the steering angle for driving through the different tracks
- speed.ipynb predicts the speed, determining when to stop and when to accelerate
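Splitting the task across the two notebooks means each model regresses one quantity, so the raw labels need normalising consistently in both. The ranges below (speed as 0/1 for stop/go, steering angle between 50 and 130 degrees with 90 as straight ahead) are assumptions about the PiCar's conventions; verify them against the dataset before training:

```python
def normalise_angle(angle, lo=50.0, hi=130.0):
    """Map a raw steering angle to [0, 1] for regression.

    The 50-130 degree range (90 = straight ahead) is an assumption
    about the PiCar's steering convention -- check it on the dataset.
    """
    return (angle - lo) / (hi - lo)

def denormalise_angle(pred, lo=50.0, hi=130.0):
    """Invert normalise_angle when deploying predictions on the car."""
    return lo + pred * (hi - lo)
```

Whatever ranges you settle on, use the same `lo`/`hi` at training time and on the car, or the predicted angles will be systematically offset.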