Course materials and projects from the Udacity Sensor Fusion Nanodegree Program.
Lidar Point Clouds Input | 3D Tracked Objects w/ Bounding Boxes
---|---
*(demo GIF)* | *(demo GIF)*
- Contained in the folder `lidar_obstacle_detection`
- Implemented 3D lidar object detection, which combines several classical algorithms:
  - Raw lidar point cloud rendering: see the top-left GIF
  - Processed result rendering with bounding boxes: see the top-right GIF
  - Lidar point cloud segmentation using the RANSAC technique (Wiki page); a minimal sketch follows the table below
  - Lidar point cloud clustering using the kd-tree algorithm (Wiki page)
Type | RANSAC 2D Line Fitting | RANSAC 3D Plane Fitting | Kd-tree Clustering
---|---|---|---
Result | *(GIF)* | *(GIF)* | *(GIF)*
Source code | RANSAC 2D source code | RANSAC 3D source code | Kd-tree source code
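For reference, the core idea behind the RANSAC plane fitting above can be sketched in a few dozen lines. This is a hypothetical, self-contained illustration (the `Point` struct and function name are assumptions, not the project's actual code; see the linked source code for the real implementation):

```cpp
#include <cmath>
#include <cstdlib>
#include <unordered_set>
#include <vector>

struct Point { float x, y, z; };  // illustrative point type

// Returns indices of the inliers of the best plane found in maxIterations tries.
std::unordered_set<int> ransacPlane(const std::vector<Point>& cloud,
                                    int maxIterations, float distanceTol) {
    std::unordered_set<int> bestInliers;
    if (cloud.size() < 3) return bestInliers;

    for (int it = 0; it < maxIterations; ++it) {
        // Pick three distinct points at random to define a candidate plane.
        std::unordered_set<int> sample;
        while (sample.size() < 3)
            sample.insert(std::rand() % cloud.size());
        auto itr = sample.begin();
        const Point& p1 = cloud[*itr++];
        const Point& p2 = cloud[*itr++];
        const Point& p3 = cloud[*itr];

        // Plane A*x + B*y + C*z + D = 0; normal from the cross product of two edges.
        float A = (p2.y - p1.y) * (p3.z - p1.z) - (p2.z - p1.z) * (p3.y - p1.y);
        float B = (p2.z - p1.z) * (p3.x - p1.x) - (p2.x - p1.x) * (p3.z - p1.z);
        float C = (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x);
        float D = -(A * p1.x + B * p1.y + C * p1.z);
        float norm = std::sqrt(A * A + B * B + C * C);
        if (norm == 0.0f) continue;  // degenerate (collinear) sample

        // Count every point within distanceTol of the plane as an inlier.
        std::unordered_set<int> inliers;
        for (int i = 0; i < static_cast<int>(cloud.size()); ++i) {
            const Point& p = cloud[i];
            float d = std::fabs(A * p.x + B * p.y + C * p.z + D) / norm;
            if (d <= distanceTol) inliers.insert(i);
        }
        if (inliers.size() > bestInliers.size()) bestInliers = inliers;
    }
    return bestInliers;  // typically the ground plane in a road scene
}
```

The returned inlier set separates the ground plane from the obstacle points, which are then grouped by the kd-tree clustering step.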
Items | Images
---|---
TTC Calculation based on 3D Object Detection | *(image)*
2D Image Keypoints Detection | *(image)*
3D Object Detection via YOLOv3 | *(image)*
- 2D Feature Tracking
  - Implemented classic image feature detection, description, and matching with OpenCV:
    - Keypoint detectors: intensity-gradient based detectors such as `HARRIS` and `SHITOMASI`, combined with a Non-Maximum Suppression (NMS) step to remove overlapping keypoints
    - Descriptors: applied OpenCV built-in descriptors, including Histogram of Oriented Gradients (HOG) based descriptors such as `SIFT` and `SURF`, as well as binary descriptors such as `BRISK`, `ORB`, and `AKAZE`
    - Descriptor matching: manually implemented L1- and L2-norm matching, as well as K-Nearest-Neighbor matching based on the descriptor distance ratio (see the sketch below)
  - Performed analysis on different detector / descriptor / matcher combinations to evaluate overall performance
  - Refer to 2D Feature Tracking Page for further details
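The KNN matching with a distance ratio test mentioned above can be illustrated with OpenCV's built-in brute-force matcher. This is a hedged sketch (the function name and the 0.8 ratio threshold are assumptions; the project's actual code is linked above):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

std::vector<cv::DMatch> matchWithRatioTest(const cv::Mat& descSource,
                                           const cv::Mat& descRef) {
    // Brute-force matcher with L2 norm (suitable for HOG-based descriptors);
    // binary descriptors would use cv::NORM_HAMMING instead.
    cv::BFMatcher matcher(cv::NORM_L2);

    // For each source descriptor, find its two closest reference descriptors.
    std::vector<std::vector<cv::DMatch>> knnMatches;
    matcher.knnMatch(descSource, descRef, knnMatches, 2);

    // Keep a match only if the best distance is clearly smaller than the
    // second best; this filters out ambiguous correspondences.
    const float ratioThreshold = 0.8f;
    std::vector<cv::DMatch> goodMatches;
    for (const auto& knn : knnMatches) {
        if (knn.size() == 2 && knn[0].distance < ratioThreshold * knn[1].distance)
            goodMatches.push_back(knn[0]);
    }
    return goodMatches;
}
```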
- 3D Object Tracking
  - Applied YOLOv3 object detection with a pre-trained model to generate bounding boxes for detected objects
  - Implemented projection of 3D lidar point clouds onto the 2D camera image
  - Calculated Time-to-Collision (TTC) based on 2D camera image keypoint matching, and on the 3D lidar point clouds projected onto the 2D images (a lidar TTC sketch follows the table below)
  - Analyzed camera TTC performance across various detector / descriptor combinations
  - Refer to 3D Object Tracking Page for further details
Camera based TTC | Lidar based TTC
---|---
*(GIF)* | *(GIF)*
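For context, the lidar-based TTC follows a constant-velocity model: `TTC = minXCurr * dT / (minXPrev - minXCurr)`, where `minXPrev` / `minXCurr` are the closest forward distances in the previous / current frame. A minimal sketch, assuming a simple `LidarPoint` struct (names are illustrative, not the project's exact code):

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct LidarPoint { double x, y, z, r; };  // x is forward distance in meters

// Constant-velocity TTC estimate from two frames separated by dT = 1 / frameRate.
double computeLidarTTC(const std::vector<LidarPoint>& prev,
                       const std::vector<LidarPoint>& curr, double frameRate) {
    double minXPrev = std::numeric_limits<double>::max();
    double minXCurr = std::numeric_limits<double>::max();
    for (const auto& p : prev) minXPrev = std::min(minXPrev, p.x);
    for (const auto& p : curr) minXCurr = std::min(minXCurr, p.x);

    double dT = 1.0 / frameRate;
    // Guard against division by zero when the gap is not closing.
    if (minXPrev - minXCurr <= 0.0)
        return std::numeric_limits<double>::infinity();
    return minXCurr * dT / (minXPrev - minXCurr);
}
```

A real implementation would also need to be robust to outlier lidar returns, e.g. by using a median or trimmed distance instead of the raw minimum; this sketch skips that step for brevity.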
Source / Input (from 2D FFT) | Result / Output of 2D CFAR
---|---
*(image)* | *(image)*
- 2D CFAR Algorithm
  - Refer to Radar Target Generation and Detection for further details.
  - The 2D CFAR algorithm takes the 2D FFT result, i.e., the complete Range Doppler Map (the variable `RDM` in the script), as its input. It then slides a window over the map; at each step it averages the values of the training cells surrounding the Cell Under Test (CUT) and adds an offset to form the detection threshold for that CUT (a sketch follows the parameter table below).
  - Parameter Selection: to achieve good 2D CFAR performance, the following parameters are set (training cells `Tr` / `Td` and guard cells `Gr` / `Gd` in the range and Doppler dimensions, plus the threshold offset `SNR` in dB):

Tr | Td | Gr | Gd | SNR
---|---|---|---|---
12 | 14 | 6 | 8 | 5
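The course script is MATLAB, but the cell-averaging logic can be sketched in C++ using the parameter names from the table. This is a hedged illustration, not the project's actual code; noise averaging is done in linear power and converted back to dB, mirroring the usual `db2pow` / `pow2db` conversions:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

using Grid = std::vector<std::vector<double>>;  // RDM values in dB

// Cell-averaging 2D CFAR sketch. Tr/Td: training cells, Gr/Gd: guard cells
// (range / Doppler dimensions); offset: threshold offset in dB (the SNR
// parameter above). Returns a 0/1 detection map; edge cells are left at 0.
Grid cfar2d(const Grid& rdm, int Tr, int Td, int Gr, int Gd, double offset) {
    int rows = rdm.size(), cols = rdm[0].size();
    Grid detections(rows, std::vector<double>(cols, 0.0));

    for (int r = Tr + Gr; r < rows - (Tr + Gr); ++r) {
        for (int c = Td + Gd; c < cols - (Td + Gd); ++c) {
            double noiseSum = 0.0;
            int count = 0;
            // Average the training cells around the CUT, skipping guard cells.
            for (int i = r - (Tr + Gr); i <= r + (Tr + Gr); ++i) {
                for (int j = c - (Td + Gd); j <= c + (Td + Gd); ++j) {
                    if (std::abs(i - r) <= Gr && std::abs(j - c) <= Gd) continue;
                    noiseSum += std::pow(10.0, rdm[i][j] / 10.0);  // dB -> linear
                    ++count;
                }
            }
            // Back to dB, then add the offset to form the threshold.
            double threshold = 10.0 * std::log10(noiseSum / count) + offset;
            if (rdm[r][c] > threshold) detections[r][c] = 1.0;
        }
    }
    return detections;
}
```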
- UKF Implementation
  - Refer to Unscented Kalman Filter Traffic Flow Tracking for further details
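As a hint of what the UKF implementation involves, its central step is generating and propagating sigma points through the process model. A minimal Eigen-based sketch of sigma-point generation (the function name and the choice of `lambda` are illustrative, not the project's exact code):

```cpp
#include <cmath>
#include <Eigen/Dense>

// Generates the 2n+1 sigma points used by the UKF prediction step, given the
// state mean x (n x 1), covariance P (n x n), and spreading parameter lambda
// (commonly lambda = 3 - n).
Eigen::MatrixXd generateSigmaPoints(const Eigen::VectorXd& x,
                                    const Eigen::MatrixXd& P, double lambda) {
    int n = x.size();
    Eigen::MatrixXd Xsig(n, 2 * n + 1);

    // Matrix square root of P via Cholesky decomposition.
    Eigen::MatrixXd A = P.llt().matrixL();

    Xsig.col(0) = x;  // the mean itself is the first sigma point
    for (int i = 0; i < n; ++i) {
        Xsig.col(i + 1)     = x + std::sqrt(lambda + n) * A.col(i);
        Xsig.col(i + 1 + n) = x - std::sqrt(lambda + n) * A.col(i);
    }
    return Xsig;
}
```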