Self Driving Car Capstone Project - System Integration

Udacity - Self-Driving Car NanoDegree

Overview

The goals of this project are to create ROS nodes that implement core functionality of the autonomous vehicle system, including traffic light detection, control, and waypoint following. Once all the ROS nodes are up and working seamlessly, the integrated code will be tested in the simulator locally and then on a real-life autonomous vehicle named Carla.


Capstone Video

Team Members

The members of team Mater:

| Name | Udacity handle | Email |
| --- | --- | --- |
| Abdelaziz R. | Abdelaziz R. | abdelaziz.raji@gmail.com |
| Askari Hasan | Hasan Askari | hasan_askari79@yahoo.com |
| Dae Robert | Robert Dämbkes | robertrd@gmx.de |
| Neranjaka J. (*) | Neranjaka Jayarathne | nernajaka.jayarathne@ttu.edu |
| Prabhakar Rana | Prabhakar R | prabhakar.rana@gmail.com |

(*) Team Lead

System Architecture Diagram:

The following is a system architecture diagram showing the ROS nodes and topics used in the project.

[System architecture diagram]

Here is the same architecture displayed using the ROS graphing tool:

[Architecture as shown by the ROS graphing tool]

Code Structure:

1 - Waypoint Updater:

Code location: ros/src/waypoint_updater/

This package contains the waypoint updater node: waypoint_updater.py. The purpose of this node is to update the target velocity property of each waypoint based on traffic light and obstacle detection data. This node will subscribe to the /base_waypoints, /current_pose, /obstacle_waypoint, and /traffic_waypoint topics, and publish a list of waypoints ahead of the car with target velocities to the /final_waypoints topic.

[Waypoint updater node inputs and outputs]
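
As a minimal sketch of what this node's pub/sub skeleton looks like (topic and message names are those listed above; the lookahead count and the nearest-waypoint search are illustrative placeholders, not necessarily what the repo uses):

```python
#!/usr/bin/env python
# Minimal pub/sub skeleton for the waypoint updater node.
# LOOKAHEAD_WPS and the brute-force nearest-waypoint search are illustrative.
import rospy
from geometry_msgs.msg import PoseStamped
from styx_msgs.msg import Lane

LOOKAHEAD_WPS = 200  # number of waypoints published ahead of the car (assumed)


class WaypointUpdater(object):
    def __init__(self):
        rospy.init_node('waypoint_updater')
        self.base_waypoints = None
        self.pose = None
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
        self.final_waypoints_pub = rospy.Publisher('/final_waypoints', Lane, queue_size=1)
        rospy.spin()

    def pose_cb(self, msg):
        self.pose = msg
        if self.base_waypoints:
            self.publish_final_waypoints()

    def waypoints_cb(self, lane):
        # /base_waypoints is published once; keep the full list for later slicing.
        self.base_waypoints = lane.waypoints

    def publish_final_waypoints(self):
        # Slice the next LOOKAHEAD_WPS waypoints starting at the one closest to the
        # car; target velocities would be lowered here when a red light is ahead.
        idx = self.closest_waypoint_index(self.pose.pose.position)
        lane = Lane()
        lane.waypoints = self.base_waypoints[idx:idx + LOOKAHEAD_WPS]
        self.final_waypoints_pub.publish(lane)

    def closest_waypoint_index(self, position):
        # Brute-force nearest neighbour; a KD-tree is a common optimisation.
        dists = [(wp.pose.pose.position.x - position.x) ** 2 +
                 (wp.pose.pose.position.y - position.y) ** 2
                 for wp in self.base_waypoints]
        return dists.index(min(dists))


if __name__ == '__main__':
    WaypointUpdater()
```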

2 - Twist Controller:

Carla is equipped with a drive-by-wire (DBW) system, meaning the throttle, brake, and steering have electronic control. This package contains the files that are responsible for control of the vehicle: the node dbw_node.py and the file twist_controller.py.

[DBW node inputs and outputs]
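
The following is a condensed sketch of the kind of logic twist_controller.py implements, assuming a PID controller for throttle and a separate yaw controller for steering; the gains, hold torque, and braking formula are illustrative values, not the ones tuned for Carla:

```python
# Condensed sketch of the throttle/brake computation in twist_controller.py.
# The PID gains, hold torque, and braking formula are illustrative assumptions.
class Controller(object):
    def __init__(self, vehicle_mass, wheel_radius, decel_limit):
        self.vehicle_mass = vehicle_mass    # kg
        self.wheel_radius = wheel_radius    # m
        self.decel_limit = decel_limit      # m/s^2, negative
        self.kp, self.ki, self.kd = 0.3, 0.1, 0.0   # assumed PID gains
        self.int_err = 0.0
        self.prev_err = 0.0

    def control(self, target_vel, current_vel, steering, dt, dbw_enabled):
        # When the safety driver takes over, reset the integrator and command nothing.
        if not dbw_enabled:
            self.int_err = 0.0
            return 0.0, 0.0, 0.0

        # PID on the velocity error gives the throttle command, clamped to [0, 1].
        error = target_vel - current_vel
        self.int_err += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        throttle = self.kp * error + self.ki * self.int_err + self.kd * deriv
        throttle = max(0.0, min(1.0, throttle))

        brake = 0.0
        if target_vel < 0.1 and current_vel < 0.1:
            # Hold the car at a stop with a constant brake torque (N*m).
            throttle, brake = 0.0, 700.0
        elif error < 0.0:
            # Slow down: convert the desired deceleration into a brake torque.
            decel = max(error, self.decel_limit)
            throttle = 0.0
            brake = abs(decel) * self.vehicle_mass * self.wheel_radius

        # Steering is assumed to come from a separate yaw controller, passed through here.
        return throttle, brake, steering
```

dbw_node.py would then publish the resulting throttle, brake, and steering values on the vehicle's command topics.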

3 - Traffic Light Detection:

This package contains the traffic light detection node: tl_detector.py. This node takes in data from the /image_color, /current_pose, and /base_waypoints topics and publishes the locations to stop for red traffic lights to the /traffic_waypoint topic.

The /current_pose topic provides the vehicle's current position, and /base_waypoints provides a complete list of waypoints the car will be following.

We built both a traffic light detection node and a traffic light classification node. Traffic light detection takes place within tl_detector.py, whereas traffic light classification takes place within ../tl_detector/light_classification_model/tl_classifier.py.

[Traffic light detection node inputs and outputs]
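
A rough sketch of the detection node's main flow, under the assumption that it debounces the classifier output before publishing; STATE_COUNT_THRESHOLD, the stop-line lookup, and the classifier module path are illustrative:

```python
#!/usr/bin/env python
# Sketch of tl_detector.py: classify each camera frame, debounce the result,
# and publish the stop-line waypoint index (or -1) on /traffic_waypoint.
import rospy
from cv_bridge import CvBridge
from std_msgs.msg import Int32
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped
from styx_msgs.msg import Lane, TrafficLight
from light_classification_model.tl_classifier import TLClassifier  # module path as above (assumed)

STATE_COUNT_THRESHOLD = 3  # require the same state on consecutive frames (assumed)


class TLDetector(object):
    def __init__(self):
        rospy.init_node('tl_detector')
        self.bridge = CvBridge()
        self.classifier = TLClassifier()
        self.pose = None
        self.waypoints = None
        self.last_state = TrafficLight.UNKNOWN
        self.state_count = 0
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
        rospy.Subscriber('/image_color', Image, self.image_cb)
        self.red_light_pub = rospy.Publisher('/traffic_waypoint', Int32, queue_size=1)
        rospy.spin()

    def pose_cb(self, msg):
        self.pose = msg

    def waypoints_cb(self, lane):
        self.waypoints = lane.waypoints

    def image_cb(self, msg):
        cv_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
        state = self.classifier.get_classification(cv_image)
        stop_wp = self.nearest_stop_line_waypoint()  # index of the next stop line

        # Debounce: only act once the same state has been seen several frames in a row.
        if state == self.last_state:
            self.state_count += 1
        else:
            self.state_count = 0
            self.last_state = state

        if self.state_count >= STATE_COUNT_THRESHOLD and state == TrafficLight.RED:
            self.red_light_pub.publish(Int32(stop_wp))
        else:
            self.red_light_pub.publish(Int32(-1))

    def nearest_stop_line_waypoint(self):
        # Placeholder: in the real node this is derived from the stop-line positions
        # in the traffic light configuration and the /base_waypoints list.
        return -1


if __name__ == '__main__':
    TLDetector()
```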

Simulator:

The team leveraged the TensorFlow detection model zoo to train a model for traffic light classification and used it inside tl_classify.py.
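
As a rough illustration of how a frozen model from the detection model zoo can be loaded and queried inside the classifier (TF 1.x style; the graph path, confidence threshold, and class-id mapping below are assumptions, not the repo's actual values):

```python
# Rough sketch of running a frozen TensorFlow Object Detection API model for
# traffic light classification. Graph path, threshold, and label ids are assumed.
import numpy as np
import tensorflow as tf

GRAPH_PATH = 'fine_tuned_model/frozen_inference_graph.pb'  # assumed location


class TLClassifier(object):
    def __init__(self):
        self.graph = tf.Graph()
        with self.graph.as_default():
            graph_def = tf.GraphDef()
            with tf.gfile.GFile(GRAPH_PATH, 'rb') as f:
                graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, name='')
        self.sess = tf.Session(graph=self.graph)
        # Standard tensor names for graphs exported by the Object Detection API.
        self.image_tensor = self.graph.get_tensor_by_name('image_tensor:0')
        self.scores = self.graph.get_tensor_by_name('detection_scores:0')
        self.classes = self.graph.get_tensor_by_name('detection_classes:0')

    def get_classification(self, image):
        """image: HxWx3 uint8 array; returns the best-scoring class id or None."""
        image_np = np.expand_dims(image, axis=0)
        scores, classes = self.sess.run([self.scores, self.classes],
                                        feed_dict={self.image_tensor: image_np})
        if scores[0][0] > 0.5:          # assumed confidence threshold
            return int(classes[0][0])   # class ids follow the training label map
        return None
```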

Steps

Dataset:

We leveraged two existing datasets (thanks Alex and Vatsal), and also added a couple of images saved from the simulator. The saved images were labelled using labelImg:

[Labelling simulator images with labelImg]

Final Dataset:

- Training: 917 images
- Testing: 277 images

Set up the training environment:

See the very detailed steps here, then convert the images and annotations to the TensorFlow record format.

(araji) abdelaziz_raji@carnd:~/workarea/tensorflow/models/research/capstone$ python convert.py --output_path sim_data.record
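
For reference, convert.py presumably follows the usual Object Detection API recipe of wrapping each labelled image in a tf.train.Example and writing it to a TFRecord; the sketch below shows that recipe with the annotation parsing (labelImg XML) left out, and the field names are the API's standard ones rather than anything specific to this repo:

```python
# Minimal sketch of a convert.py-style script: wrap each labelled image in a
# tf.train.Example using the Object Detection API feature keys and write the
# result to a TFRecord file. Parsing of the labelImg annotations is omitted.
import tensorflow as tf
from object_detection.utils import dataset_util

flags = tf.app.flags
flags.DEFINE_string('output_path', 'sim_data.record', 'Path to the output TFRecord')
FLAGS = flags.FLAGS


def create_tf_example(example):
    # 'example' is assumed to be a dict with the encoded image bytes, its size,
    # and bounding boxes already normalised to [0, 1].
    return tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(example['height']),
        'image/width': dataset_util.int64_feature(example['width']),
        'image/filename': dataset_util.bytes_feature(example['filename']),  # bytes
        'image/encoded': dataset_util.bytes_feature(example['encoded_jpg']),
        'image/format': dataset_util.bytes_feature(b'jpg'),
        'image/object/bbox/xmin': dataset_util.float_list_feature(example['xmins']),
        'image/object/bbox/xmax': dataset_util.float_list_feature(example['xmaxs']),
        'image/object/bbox/ymin': dataset_util.float_list_feature(example['ymins']),
        'image/object/bbox/ymax': dataset_util.float_list_feature(example['ymaxs']),
        'image/object/class/text': dataset_util.bytes_list_feature(example['class_texts']),
        'image/object/class/label': dataset_util.int64_list_feature(example['class_ids']),
    }))


def main(_):
    writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
    for example in []:  # iterate over parsed annotations here
        writer.write(create_tf_example(example).SerializeToString())
    writer.close()


if __name__ == '__main__':
    tf.app.run()
```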

Transfer learning:

The team imported the pretrained ssd_mobilenet model, expanded the tarball, and customized the config file to match our environment and dataset:

[Excerpt of the customized pipeline config]

Training, monitoring, and exporting the resulting graph:

Start training:
python train.py --logtostderr --train_dir=./models/train --pipeline_config_path=config/ssd_mobilenet_sim.config
Start monitoring with TensorBoard:
tensorboard --logdir models/train4/ --host 10.142.0.13
TensorBoard 0.4.0 at http://10.142.0.13:6006 (Press CTRL+C to quit)

TensorBoard snapshot during training:


To export (freeze) the graph:

python export_inference_graph.py --input_type image_tensor --pipeline_config_path ./config/ssd_mobilenet_sim.config --trained_checkpoint_prefix ./models/train/model.ckpt-2000 --output_directory ./fine_tuned_model
Notebook validation:

Using the Udacity-provided notebook, modified to load our own inference model, we tested a couple of images:

[Sample classification results on test images]

Site Training

Site training followed a similar process, but a Faster R-CNN network model was selected and trained with around 350 images.

Original README:

This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

Please use one of the two installation options: either the native installation or the Docker installation.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.

  • Follow these instructions to install ROS

  • Dataspeed DBW

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the docker container

docker build . -t capstone

Run the docker file

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Port Forwarding

To set up port forwarding, please refer to the "uWebSocketIO Starter Guide" found in the classroom (see Extended Kalman Filter Project lesson).

Usage

  1. Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
  2. Install python dependencies
cd CarND-Capstone
pip install -r requirements.txt
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator

Real world testing

  1. Download the training bag that was recorded on the Udacity self-driving car.
  2. Unzip the file
unzip traffic_light_bag_file.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
  4. Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real life images

Other library/driver information

Outside of requirements.txt, here is information on other driver/library versions used in the simulator and Carla:

Specific to these libraries, the simulator grader and Carla use the following:

| | Simulator | Carla |
| --- | --- | --- |
| Nvidia driver | 384.130 | 384.130 |
| CUDA | 8.0.61 | 8.0.61 |
| cuDNN | 6.0.21 | 6.0.21 |
| TensorRT | N/A | N/A |
| OpenCV | 3.2.0-dev | 2.4.8 |
| OpenMP | N/A | N/A |

We are working on a fix to line up the OpenCV versions between the two.
