
rddc2020

road damage detection challenge 2020 IMSC submission

This repository contains source code and trained models for the Road Damage Detection and Classification Challenge that was held as part of the 2020 IEEE Big Data conference.

The best model achieved a mean F1-score of 0.674878682854973 on the test1 dataset and 0.666213894130645 on the test2 dataset of the competition.

Sample predictions:

Table of contents

  • Prerequisites
  • Quick-start
  • RDDC Dataset Setup for YOLOv5
  • IMSC YOLOv5 Model zoo
  • Detection / Submission
  • Performance on RDDC test datasets
  • Training

Prerequisites

You need to install:

  • Python3 >= 3.6

  • Use requirements.txt to install the required Python dependencies:

    # Python >= 3.6 is needed
    pip3 install -r requirements.txt

Quick-start

  1. Clone the road-damage-detection repo into $RDD:

    git clone https://github.com/USC-InfoLab/rddc2020.git
  2. Install Python packages:

    pip3 install -r requirements.txt

RDDC Dataset Setup for YOLOv5

NOTE: The entire process (steps 1-4 explained in this section) of downloading and preparing the GRDDC 2020 dataset can be done by executing yolov5/scripts/dataset_setup_for_yolov5.sh:

    bash yolov5/scripts/dataset_setup_for_yolov5.sh

OR

  1. Go to the yolov5 directory

    cd yolov5
  2. Execute download_road2020.sh to download the train and test datasets

    bash scripts/download_road2020.sh
  3. Detection: structure the test datasets for inference using yolov5

    bash scripts/prepare_test.sh
  4. Training: generate the label files for yolov5 using scripts/xml2yolo.py (see the conversion sketch after this list)

    python3 scripts/xml2yolo.py
    • Use python3 scripts/xml2yolo.py --help for command line option details
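For orientation, converting Pascal VOC-style XML annotations to YOLO label files boils down to normalizing each bounding box by the image size. The sketch below is a hypothetical illustration of that step; the class list (D00, D10, D20, D40 from GRDDC 2020) and all paths are assumptions, not the contents of scripts/xml2yolo.py:

    # Hypothetical sketch of VOC XML -> YOLO txt conversion; NOT the actual
    # scripts/xml2yolo.py. Class names assumed from the GRDDC 2020 challenge.
    import xml.etree.ElementTree as ET
    from pathlib import Path

    CLASSES = ["D00", "D10", "D20", "D40"]  # assumed damage classes

    def voc_to_yolo(xml_path: str, out_dir: str) -> None:
        root = ET.parse(xml_path).getroot()
        w = float(root.find("size/width").text)
        h = float(root.find("size/height").text)
        lines = []
        for obj in root.iter("object"):
            name = obj.find("name").text
            if name not in CLASSES:  # skip classes outside the challenge set
                continue
            box = obj.find("bndbox")
            xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
            xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
            # YOLO label format: class cx cy bw bh, all normalized to [0, 1]
            cx, cy = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
            bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
            lines.append(f"{CLASSES.index(name)} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
        out = Path(out_dir) / (Path(xml_path).stem + ".txt")
        out.write_text("\n".join(lines))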

IMSC YOLOv5 Model zoo

  1. Go to the yolov5 directory

    cd yolov5
  2. Download the YOLOv5 model zoo weights:

    bash scripts/download_IMSC_grddc2020_weights.sh

Detection / Submission

  1. Download weights as described in IMSC YOLOv5 Model zoo

  2. Go to the yolov5 directory

    cd yolov5
  3. Execute one of the following commands to generate results.csv (competition format) and predicted images under inference/output/ (a sketch of the submission format follows these commands):

    # inference using best ensemble model for test1 dataset
    python3 detect.py --weights weights/IMSC/last_95_448_32_aug2.pt weights/IMSC/last_95_640_16.pt weights/IMSC/last_120_640_32_aug2.pt --img 640 --source datasets/road2020/test1/test_images/ --conf-thres 0.22 --iou-thres 0.9999 --agnostic-nms --augment
    # inference using best ensemble model for test2 dataset
    python3 detect.py --weights weights/IMSC/last_95_448_32_aug2.pt  weights/IMSC/last_95_640_16.pt  weights/IMSC/last_120_640_32_aug2.pt weights/IMSC/last_100_100_640_16.pt --img 640 --source datasets/road2020/test2/test_images/ --conf-thres 0.22 --iou-thres 0.9999 --agnostic-nms --augment
    # inference using best non-ensemble model for test1 dataset
    python3 detect.py --weights weights/IMSC/last_95.pt --img 640 --source datasets/road2020/test1/test_images/ --conf-thres 0.20 --iou-thres 0.9999  --agnostic-nms --augment
    # inference using best non-ensemble model for test2 dataset
    python3 detect.py --weights weights/IMSC/last_95.pt --img 640 --source datasets/road2020/test2/test_images/ --conf-thres 0.20 --iou-thres 0.9999  --agnostic-nms --augment
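For context, the competition submission format pairs each test image with a prediction string of space-separated label/box tuples. The sketch below is a hypothetical writer for that format; the five-detections cap and integer labels 1-4 are assumptions based on the GRDDC 2020 rules, not code taken from detect.py:

    # Hypothetical writer for the results.csv submission format; NOT taken
    # from detect.py. Assumes labels 1-4 (D00, D10, D20, D40) and a cap of
    # five predictions per image, per the GRDDC 2020 rules.
    import csv

    def write_submission(predictions: dict, out_csv: str = "results.csv") -> None:
        """predictions maps image filename -> list of (label, xmin, ymin, xmax, ymax)."""
        with open(out_csv, "w", newline="") as f:
            writer = csv.writer(f)
            for image_name, boxes in predictions.items():
                pred_str = " ".join(
                    f"{label} {xmin} {ymin} {xmax} {ymax}"
                    for label, xmin, ymin, xmax, ymax in boxes[:5]  # assumed cap
                )
                writer.writerow([image_name, pred_str])

    # usage: write_submission({"Japan_000000.jpg": [(1, 10, 20, 110, 220)]})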

Performance on RDDC test datasets

| YOLOv5x_448_32_aug2 | YOLOv5x_640_16_95 | YOLOv5x_640_16_100 | YOLOv5x_640_32 | YOLOv5x_640_16_aug2 | YOLOv5x_640_32_aug2 | test1 F1-score | test2 F1-score |
|---|---|---|---|---|---|---|---|
|    |    |    | ✔️ |    |    | 0.66697383879131  | 0.651389430313506 |
| ✔️ | ✔️ |    |    |    | ✔️ | 0.674878682854973 | 0.665632401648316 |
| ✔️ | ✔️ | ✔️ |    |    | ✔️ | 0.674198239966431 | 0.666213894130645 |
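As a reminder, the reported metric is the mean F1-score over the test images; the standard definition (the general formula, not code from this repository) is:

    # Standard F1-score: harmonic mean of precision and recall.
    def f1(precision: float, recall: float) -> float:
        return 2 * precision * recall / (precision + recall)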

Training

  1. Download pre-trained weights from the yolov5 repo

    bash weights/download_weights.sh
  2. Run the following command (a sketch of the dataset config referenced by --data follows)

    python3 train.py --data data/road.yaml --cfg models/yolov5x.yaml --weights weights/yolov5x.pt --batch-size 64
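For orientation, a YOLOv5 dataset config such as data/road.yaml declares the image paths and class names. The YAML below is an assumed example, not the repository's actual file; paths and class order are illustrative:

    # Hypothetical sketch of a YOLOv5 dataset config; check the repository's
    # actual data/road.yaml. Paths and class order are assumptions.
    train: datasets/road2020/train/images
    val: datasets/road2020/val/images

    nc: 4                                 # number of classes
    names: ["D00", "D10", "D20", "D40"]   # GRDDC 2020 damage categories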

Visit the official yolov5 source code for more training and inference-time arguments.
