
YOLOv5_DOTA_OBB

YOLOv5 on the DOTA_OBB dataset with CSL labels (Oriented Object Detection).
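With CSL (Circular Smooth Label), the angle Θ is treated as a classification problem over 180 angle bins, using a smoothed one-hot label whose window wraps around so that 179° and 0° are treated as neighbours. The snippet below is only a minimal sketch of that encoding; the bin count, window radius, and Gaussian window are illustrative choices, not necessarily the exact settings of this repository.

```python
import numpy as np

def gaussian_csl(theta, num_bins=180, radius=6):
    """Illustrative Circular Smooth Label for an angle theta in [0, 180).

    Returns a vector of length num_bins: a Gaussian window centred on the
    angle's bin, truncated at `radius` bins and wrapped circularly so that
    bins near 0 and 180 degrees count as neighbours.
    """
    bins = np.arange(num_bins)
    d = np.abs(bins - int(round(theta)) % num_bins)
    d = np.minimum(d, num_bins - d)            # circular distance in bins
    label = np.exp(-(d ** 2) / (2.0 * radius ** 2))
    label[d > radius] = 0.0                    # keep only the smoothing window
    return label
```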

Datasets and pretrained checkpoint

Functions

  • train.py. Train.

  • detect.py. Detect and visualize the detection results, and save the detection result txt files.

  • evaluation.py. Merge the detection results, visualize them, and finally evaluate the detector.

Installation (Linux and Windows)

This code is written in pure Python and can easily be used on both Windows and Linux. It requires Python 3.8 with all requirements.txt dependencies installed, including torch==1.6 and opencv-python==4.1.2.30. To install, run:

$   pip install -r requirements.txt

More detailed explanation

For the details and principles behind the implementation, see my Zhihu article:

Usage Example

1. 'Get Dataset'

  • Split the DOTA_OBB images and labels, then convert the DOTA format to the YOLO longside format (see the conversion sketch below this list).

  • You can refer to hukaixuan19970627/DOTA_devkit_YOLO.

  • The Oriented YOLO Longside Format is:

$  classid    x_c   y_c   longside   shortside    Θ    Θ∈[0, 180)


* longside: The longest side of the oriented rectangle.

* shortside: The other side of the oriented rectangle.

* Θ: The angle swept as the x-axis rotates clockwise until it meets the longside.

WARNING: IMAGE SIZE MUST SATISFY 'HEIGHT = WIDTH'
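The conversion itself is handled by the DOTA_devkit_YOLO scripts linked above; the sketch below only illustrates how a DOTA polygon (x1 y1 x2 y2 x3 y3 x4 y4) maps to the longside format, assuming the four points already form a rectangle given in order. The function name is illustrative, and real YOLO labels are normally also normalized by image width and height afterwards.

```python
import numpy as np

def poly_to_longside(poly):
    """Sketch: DOTA polygon [x1, y1, ..., x4, y4] -> (x_c, y_c, longside, shortside, theta).

    Assumes the four points describe a rectangle given in order.
    theta is in [0, 180): the angle the x-axis sweeps clockwise
    (image coordinates, y grows downward) until it meets the long side.
    """
    pts = np.asarray(poly, dtype=np.float64).reshape(4, 2)
    x_c, y_c = pts.mean(axis=0)

    e1 = pts[1] - pts[0]                      # one pair of adjacent sides
    e2 = pts[2] - pts[1]
    len1, len2 = np.linalg.norm(e1), np.linalg.norm(e2)

    long_vec = e1 if len1 >= len2 else e2     # direction of the longest side
    longside, shortside = max(len1, len2), min(len1, len2)

    # atan2 with y pointing down gives the clockwise angle from the x-axis
    theta = np.degrees(np.arctan2(long_vec[1], long_vec[0])) % 180.0
    return x_c, y_c, longside, shortside, theta
```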

2. 'train.py'

  • Everything is the same as ultralytics/yolov5. It is better to train on the demo files first before training on your custom dataset.
  • Single GPU training:
$ python train.py  --batch-size 4 --device 0
  • Multi GPU training: DistributedDataParallel Mode
$ python -m torch.distributed.launch --nproc_per_node 4 train.py --sync-bn --device 0,1,2,3

3. 'detect.py'

  • Download the demo files.
  • Then run the demo to visualize the detection results and get the result txt files.
$  python detect.py
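As an illustration of how an oriented box in the longside format can be drawn (the repository's own plotting code may differ), the sketch below converts (x_c, y_c, longside, shortside, Θ) back to four corner points and draws them with OpenCV; the function name and colors are illustrative.

```python
import cv2
import numpy as np

def draw_longside_box(img, x_c, y_c, longside, shortside, theta_deg,
                      color=(0, 255, 0), thickness=2):
    """Sketch: draw one oriented box given in longside format onto img."""
    t = np.deg2rad(theta_deg)
    u = np.array([np.cos(t), np.sin(t)])      # unit vector along the long side
    v = np.array([-np.sin(t), np.cos(t)])     # unit vector along the short side
    c = np.array([x_c, y_c], dtype=np.float64)
    hl, hs = longside / 2.0, shortside / 2.0

    corners = np.stack([c + hl * u + hs * v,
                        c + hl * u - hs * v,
                        c - hl * u - hs * v,
                        c - hl * u + hs * v]).round().astype(np.int32)
    cv2.polylines(img, [corners.reshape(-1, 1, 2)], isClosed=True,
                  color=color, thickness=thickness)
    return img
```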

4. 'evaluation.py'

  • Run the detect.py demo first. Then change the paths to yours:
evaluation(
    detoutput=r'/....../DOTA_demo_view/detection',
    imageset=r'/....../DOTA_demo_view/row_images',
    annopath=r'/....../DOTA_demo_view/row_DOTA_labels/{:s}.txt'
)
draw_DOTA_image(
    imgsrcpath=r'/...../DOTA_demo_view/row_images',
    imglabelspath=r'/....../DOTA_demo_view/detection/result_txt/result_merged',
    dstpath=r'/....../DOTA_demo_view/detection/merged_drawed'
)
  • Run the evaluation.py demo to get the evaluation results and visualize the merged detection results.
$  python evaluation.py
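For oriented boxes, the IoU used during evaluation is a polygon IoU rather than an axis-aligned box IoU. As an illustration only (the repository's evaluation.py may compute it differently), a polygon IoU can be sketched with shapely:

```python
from shapely.geometry import Polygon

def poly_iou(poly1, poly2):
    """Sketch: IoU of two quadrilaterals, each given as four (x, y) corners."""
    p1, p2 = Polygon(poly1), Polygon(poly2)
    if not (p1.is_valid and p2.is_valid):
        return 0.0
    inter = p1.intersection(p2).area
    union = p1.area + p2.area - inter
    return inter / union if union > 0 else 0.0
```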

Model Picture

(Image: detection result after merging)

Acknowledgements

Thanks to the following projects, in no particular order:

About the Author

  Name: "杨刚"
  About me: "kuazhangxiaoai"