Crowdsensing-based Road Damage Detection Challenge (CRDDC2022)
This repository contains the source code and trained models for the Crowdsensing-based Road Damage Detection Challenge (CRDDC2022), which was held as part of the 2022 IEEE Big Data conference.
Our group's results in the competition are as follows:
| | all countries | India | Japan | Norway | US | Mean F1-Score |
|---|---|---|---|---|---|---|
| F1-Score | 0.694 | 0.519 | 0.773 | 0.464 | 0.727 | 0.635 |
Sample predictions are provided for each damage class: D00 (longitudinal crack), D10 (transverse crack), D20 (alligator crack), and D40 (pothole).
4. Training
-
Clone the road-damage-detection repo into your path:

```
git clone https://github.com/hehualin-tut/YPLNet.git
```
Use `requirements.txt` to install the required Python dependencies:

```
# Python >= 3.6 is needed
cd yolov5-master
pip3 install -r requirements.txt
```
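Before running anything on the GPU, it can help to confirm that PyTorch actually sees it. The snippet below is a generic check, not part of this repository:

```python
# Generic environment check (not part of this repo): confirm the GPU is visible.
import torch

print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # On the authors' setup this would report a Tesla V100.
    print("device:", torch.cuda.get_device_name(0))
```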
Note: the experiments were carried out on a Tesla V100 under Ubuntu 18.04. If you get a "CUDA out of memory" error during the training phase, try reducing `--img-size` and `--batch-size` (for example, halving the batch size to 16).
Execute one of the following commands to generate `results.csv` (competition format) and predicted images under `inference/output/`:
```
python3 detect.py --weights weights/all/32-1280-140.pt --img 1280 --source [your datasets path] --conf-thres 0.09 --iou-thres 0.9999 --agnostic-nms --augment
python3 detect.py --weights weights/India/32-1280-140.pt --img 1280 --source [your datasets path] --conf-thres 0.07 --iou-thres 0.9999 --agnostic-nms --augment
python3 detect.py --weights weights/Japan/32-1024-150.pt --img 1024 --source [your datasets path] --conf-thres 0.14 --iou-thres 0.9999 --agnostic-nms --augment
python3 detect.py --weights weights/Norway/32-1280-138.pt --img 1280 --source [your datasets path] --conf-thres 0.10 --iou-thres 0.9999 --agnostic-nms --augment
python3 detect.py --weights weights/US/32-1280-140.pt --img 1280 --source [your datasets path] --conf-thres 0.11 --iou-thres 0.9999 --agnostic-nms --augment
```
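If you want to sanity-check `results.csv` before submitting, a minimal sketch is below. It assumes the RDDC-style submission layout of one row per image with a prediction string of space-separated `label xmin ymin xmax ymax` tuples; adjust the parsing if your file differs.

```python
# Hypothetical sanity check for results.csv; assumes RDDC-style rows of the
# form "<image_name>,<label xmin ymin xmax ymax [label xmin ymin xmax ymax ...]>".
import csv
from collections import Counter

per_class = Counter()
images = 0

with open("results.csv", newline="") as f:
    for row in csv.reader(f):
        if not row:
            continue
        images += 1
        fields = row[1].split() if len(row) > 1 else []
        # every detection is a 5-tuple: class label + 4 box coordinates
        assert len(fields) % 5 == 0, f"malformed prediction string: {row}"
        for i in range(0, len(fields), 5):
            per_class[fields[i]] += 1

print(f"{images} images; detections per class: {dict(per_class)}")
```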
To train, first modify the paths of the training and test sets in `data/rdd4.yaml` to match your own. For splitting the data into training and test sets and normalizing the labels, refer to `train_test_split.py` and `data_normalization.py`; a sketch of the label-normalization step follows below.
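`data_normalization.py` itself is not reproduced here; the sketch below shows the standard conversion such a script performs, from the RDD dataset's Pascal VOC XML boxes (pixel `xmin`/`ymin`/`xmax`/`ymax`) to the normalized `class x_center y_center width height` text format YOLOv5 expects. The directory layout and class order are assumptions; match them to `rdd4.yaml`.

```python
# Hypothetical sketch of the label-normalization step: Pascal VOC XML -> YOLO txt.
import xml.etree.ElementTree as ET
from pathlib import Path

CLASSES = ["D00", "D10", "D20", "D40"]  # assumed class order; match rdd4.yaml

def convert(xml_path: Path, out_dir: Path) -> None:
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:
            continue  # ignore damage types outside the four trained classes
        b = obj.find("bndbox")
        x1, y1 = float(b.find("xmin").text), float(b.find("ymin").text)
        x2, y2 = float(b.find("xmax").text), float(b.find("ymax").text)
        # pixel corners -> normalized center/size, as YOLOv5 expects
        lines.append(f"{CLASSES.index(name)} {(x1 + x2) / 2 / w:.6f} "
                     f"{(y1 + y2) / 2 / h:.6f} {(x2 - x1) / w:.6f} {(y2 - y1) / h:.6f}")
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / f"{xml_path.stem}.txt").write_text("\n".join(lines))

for xml_file in Path("annotations/xmls").glob("*.xml"):  # assumed layout
    convert(xml_file, Path("labels"))
```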
For all countries, India, Norway, and the US, we use weights from different epochs of a single training run, which saves time. You can run:
```
python3 train.py --data data/rdd4.yaml --cfg models/yolov5s-psalcfi.yaml --batch-size 32 --img-size 1280 --epoch 140
```
Then the weights under `runs/train/[your project]` are processed using the optimizer script. Before that, you need to change `home_dir` and `sub_dir` in `optimizer.py` to the paths under your own project. Then run:
```
python3 optimizer.py
```
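The README does not spell out what `optimizer.py` does. One plausible reading, sketched below, is that it walks the saved checkpoints under `home_dir`/`sub_dir` and applies YOLOv5's `strip_optimizer` utility, which removes the optimizer state so the checkpoints shrink and are ready for inference; treat this as an assumption, not the repository's actual code.

```python
# Hypothetical sketch of an optimizer.py-style pass; run from yolov5-master so
# that utils.general is importable. NOT the repository's actual script.
from pathlib import Path

from utils.general import strip_optimizer  # ships with YOLOv5

home_dir = "runs/train"   # change to your own project path
sub_dir = "exp/weights"   # change to your own run's weights folder

for ckpt in sorted(Path(home_dir, sub_dir).glob("*.pt")):
    strip_optimizer(str(ckpt))  # removes optimizer state from the file in place
    print("processed", ckpt)
```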
For our inference results, the epochs used for all countries, India, Norway, and the US are 140, 140, 138, and 140, respectively.
For Japan, we applied data augmentation to the D40 class to get better results. You can run:
```
python3 train.py --data data/rdd4.yaml --cfg models/yolov5s-psalcfi.yaml --batch-size 32 --img-size 1024 --epoch 150
```
The weights obtained after training also need to be processed with `optimizer.py`.
This project is based on YOLOv5; visit the official YOLOv5 source code for more training- and inference-time arguments. For data augmentation, visit imgaug for more details; a minimal sketch follows.
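The exact augmentation recipe for D40 is not given in this README; below is a minimal imgaug sketch for generating extra copies of images that contain D40 boxes. The augmenters, file name, and box coordinates are placeholders.

```python
# Hypothetical D40 oversampling sketch with imgaug; the actual recipe used for
# Japan is not specified in this README.
import imageio.v2 as imageio
import imgaug.augmenters as iaa
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage

seq = iaa.Sequential([
    iaa.Fliplr(0.5),               # horizontal flip half the time
    iaa.Affine(rotate=(-10, 10)),  # small random rotation
    iaa.Multiply((0.8, 1.2)),      # brightness jitter
])

image = imageio.imread("Japan_000001.jpg")  # placeholder file name
bbs = BoundingBoxesOnImage(
    [BoundingBox(x1=100, y1=200, x2=300, y2=260, label="D40")],  # placeholder box
    shape=image.shape,
)
image_aug, bbs_aug = seq(image=image, bounding_boxes=bbs)
imageio.imwrite("Japan_000001_aug.jpg", image_aug)
# remember to write bbs_aug back out in YOLO label format next to the image
```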