In this project, you can enjoy a new version of YOLOv1 built with PyTorch:
- Backbone: resnet18
- Head: SPP, SAM
- Batchsize: 32
- Base lr: 1e-3
- Max epoch: 160
- LRstep: 60, 90
- optimizer: SGD
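The hyper-parameters above imply a step learning-rate schedule (base lr 1e-3, decayed at epochs 60 and 90). As a minimal sketch of that schedule in plain Python (the decay factor 0.1 is my assumption, the common default; check train.py for the real value):

```python
def learning_rate(epoch, base_lr=1e-3, milestones=(60, 90), gamma=0.1):
    """Step LR schedule: multiply the lr by `gamma` at each milestone
    epoch that has been reached. Mirrors the settings listed above."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

So with these settings, epochs 0-59 train at 1e-3, epochs 60-89 at 1e-4, and epochs 90-159 at 1e-5.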
Before I tell you how to use this project, I must mention one important difference between the original YOLOv1 and mine:
- For data augmentation, I copied the augmentation code from https://github.com/amdegroot/ssd.pytorch, a superb project reproducing SSD. If you are interested in SSD, just clone it and learn from it (and don't forget to star it!).
So I didn't write the data augmentation myself. I'm a little lazy~~
My loss function and ground-truth creator are both in tools.py, and you can try changing any of the parameters to improve the model.
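To give a feel for what a YOLOv1-style ground-truth creator does, here is a toy sketch of its core step: mapping a box center to a grid cell and computing the within-cell offsets the model regresses. The names and the stride value are illustrative, not the ones used in tools.py:

```python
def grid_cell(box, stride=32):
    """Toy YOLOv1-style target assignment: find which grid cell the
    box center falls into, plus the center offsets inside that cell.

    box: (xmin, ymin, xmax, ymax) in input-image pixels.
    stride: downsampling factor of the backbone (illustrative value).
    """
    xmin, ymin, xmax, ymax = box
    cx = (xmin + xmax) / 2.0          # box center, in pixels
    cy = (ymin + ymax) / 2.0
    gx, gy = cx / stride, cy / stride  # center in grid units
    col, row = int(gx), int(gy)        # responsible grid cell
    tx, ty = gx - col, gy - row        # offsets in [0, 1) to regress
    return col, row, tx, ty
```

For example, a box (100, 100, 200, 200) on a 416x416 input has its center at (150, 150), which lands in grid cell (4, 4) with offsets (0.6875, 0.6875).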
Environment:
- Python 3.6, opencv-python, PyTorch 1.1.0, CUDA 10.0, cuDNN 7.5
- For training: Intel i9-9940k, TITAN RTX (24 GB)
- For inference: Intel i5-6300H, GTX 1060 (3 GB)
VOC:

| dataset    | size | mAP  | FPS |
|------------|------|------|-----|
| VOC07 test | 320  | 64.4 | -   |
| VOC07 test | 416  | 68.5 | -   |
| VOC07 test | 608  | 71.5 | -   |
COCO:

| dataset  | size | AP    | AP50  |
|----------|------|-------|-------|
| COCO val | 320  | 14.50 | 30.15 |
| COCO val | 416  | 17.34 | 35.28 |
| COCO val | 608  | 19.90 | 39.27 |
- PyTorch-GPU 1.1.0/1.2.0/1.3.0
- Tensorboard 1.14
- opencv-python, Python 3.6/3.7
For now, I only train and test on PASCAL VOC2007 and VOC2012.
I copied the download scripts from the following excellent project: https://github.com/amdegroot/ssd.pytorch
I have uploaded VOC2007 and VOC2012 to BaiduYunDisk, so researchers in China can download them from there:

Link: https://pan.baidu.com/s/1tYPGCYGyC0wjpC97H-zzMQ
Password: 4la9

You will get a VOCdevkit.zip; just unzip it and put it into data/. After that, the whole path to the VOC dataset is:

data/VOCdevkit/VOC2007
data/VOCdevkit/VOC2012
```shell
# specify a directory for the dataset to be downloaded into, else the default is ~/data/
sh data/scripts/VOC2007.sh # <directory>
sh data/scripts/VOC2012.sh # <directory>
```
I copied the download script from the following excellent project: https://github.com/DeNA/PyTorch_YOLOv3

Just run:

```shell
sh data/scripts/COCO2017.sh
```

You will get COCO train2017, val2017, and test2017:
data/COCO/annotations/
data/COCO/train2017/
data/COCO/val2017/
data/COCO/test2017/
```shell
python train.py -d voc --cuda -v [select a model] -ms
```
You can run `python train.py -h` to check all optional arguments.
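For reference, here is a minimal argparse sketch covering the flags used in the commands throughout this README. The actual definitions live in train.py and may differ (in particular, the destination names here are my guesses):

```python
import argparse

# Illustrative parser for the flags used in this README's commands.
parser = argparse.ArgumentParser(description="YOLOv1 training (sketch)")
parser.add_argument("-d", "--dataset", default="voc",
                    help="voc, coco, coco-val, or coco-test")
parser.add_argument("-v", "--version", default=None,
                    help="select a model")
parser.add_argument("--cuda", action="store_true",
                    help="run on GPU")
parser.add_argument("-ms", "--multi_scale", action="store_true",
                    help="enable multi-scale training")

# Example: parse the VOC training command shown above.
args = parser.parse_args(["-d", "voc", "--cuda", "-v", "yolo", "-ms"])
```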
```shell
python train.py -d coco --cuda -v [select a model] -ms
```
```shell
python test.py -d voc --cuda -v [select a model] --trained_model [path to the trained model]
python test.py -d coco-val --cuda -v [select a model] --trained_model [path to the trained model]
```
```shell
python eval.py -d voc --cuda -v [select a model] --trained_model [path to the trained model]
```

To run on COCO val:

```shell
python eval.py -d coco-val --cuda -v [select a model] --trained_model [path to the trained model]
```

To run on COCO test-dev (make sure you have downloaded test2017):

```shell
python eval.py -d coco-test --cuda -v [select a model] --trained_model [path to the trained model]
```
You will get a .json file that can be submitted to the COCO evaluation server.
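For context, the COCO server expects detections in the standard results format: a JSON list with one record per detection. A small sketch of what the generated file looks like (all values here are made up, and the filename is only an example of the server's expected naming pattern):

```python
import json

# Illustrative COCO detection-results format (values are made up).
detections = [
    {
        "image_id": 42,           # id of the image in test2017
        "category_id": 18,        # COCO category id (18 = dog)
        "bbox": [258.2, 41.3, 348.3, 243.8],  # [x, y, width, height]
        "score": 0.92,            # detection confidence
    },
]

with open("detections_test-dev2017_yolo_results.json", "w") as f:
    json.dump(detections, f)
```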