This repo covers the MAX78000 training and synthesis pipeline for the YOLO v1 model.
- YOLO_V1_Train_QAT.py: Layer-wise QAT. Set args.qat = True to quantize layers 1, 2, ..., 24 at epochs 100, 200, ..., 2400; set args.fuse = True to fuse the BN layers of layers 1, 2, ..., 24 at epochs 2500, 2600, ..., 4800 (see the schedule sketch after this file list).
- YOLO_V1_Test.py: Fake INT8 test of the model; change the path of the weight file (*.pth) to test different models.
- YOLO_V1_Test_INT8.py: Real INT8 test of the model; not used at the current stage.
- YOLO_V1_DataSet_small.py: Preprocesses the VOC2007 dataset.
- yolov1_bn_model.py: Defines the structure of the deep neural network.
- YOLO_V1_LossFunction.py: Defines the loss function.
- weights/YOLO_V1_Z_5_450_Guanchu-BN_bs16_quant1_3000.pth: Model parameters after 3000 epochs of training, with args.qat = True and args.fuse = False.
- weights/YOLO_V1_Z_5_450_Guanchu-BN_bs16_quant1_4000.pth: Model parameters after 4000 epochs of training, with args.qat = True and args.fuse = False.
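A minimal sketch of the epoch-to-layer mapping described above for YOLO_V1_Train_QAT.py; the real schedule is implemented inside that script and may differ in its details:

```python
# Sketch of the layer-wise schedule: layer i is quantized at epoch 100*i and
# its BN is fused at epoch 2400 + 100*i. Illustrative only.
NUM_LAYERS = 24

def layers_quantized(epoch):
    """Layers already quantized at a given epoch (args.qat = True)."""
    return list(range(1, min(epoch // 100, NUM_LAYERS) + 1))

def layers_bn_fused(epoch):
    """Layers whose BN has been fused at a given epoch (args.fuse = True)."""
    return list(range(1, min(max(epoch - 2400, 0) // 100, NUM_LAYERS) + 1))

print(layers_quantized(1000))  # layers 1..10 are quantized by epoch 1000
print(layers_bn_fused(2700))   # layers 1..3 have fused BN by epoch 2700
```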
- Put this folder ('code') and the 'dataset' folder into 'ai8x-training', so that they sit in the same directory as ai8x.py.
- Run 'python3 YOLO_V1_Train_QAT.py --gpu 0 --qat True --fuse False' (see the flag-parsing sketch below this list).
- You can change the hyperparameters as you wish, but there is no need to: the current values work for our layer-wise QAT training.
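Note that --qat and --fuse are passed as the strings 'True'/'False'. The actual argument definitions live in YOLO_V1_Train_QAT.py and may differ; the sketch below only illustrates one common way to turn such string flags into booleans (a plain type=bool would treat any non-empty string, including 'False', as True):

```python
import argparse

def str2bool(value):
    # Map the string flags "True"/"False" to real booleans.
    return str(value).lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument("--gpu", type=int, default=0)            # GPU index
parser.add_argument("--qat", type=str2bool, default=True)    # enable layer-wise QAT
parser.add_argument("--fuse", type=str2bool, default=False)  # enable BN fusion
args = parser.parse_args()
print(args.gpu, args.qat, args.fuse)
```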
- Open YOLO_V1_Test.py and change line 27 to the path of your trained model.
- Run YOLO_V1_Test.py (python3 YOLO_V1_Test.py, or run it from PyCharm).
We intend to focus on real INT8 testing after the model has passed the fake INT8 testing. Hence, YOLO_V1_Test_INT8.py, nms.py, and sigmoid.py are not used at the current stage.
- Open YOLO_V1_Test.py and uncomment lines 14, 15, and 29-36.
- Run YOLO_V1_Test.py to generate the checkpoint file in the ./weights/ directory. You can then quantize the checkpoint using ai8x-synthesis (see the checkpoint check below).
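Before handing the checkpoint to ai8x-synthesis, it can help to confirm what was written. A minimal sketch, using a hypothetical file name; substitute whatever checkpoint YOLO_V1_Test.py actually wrote to ./weights/:

```python
import torch

# Hypothetical name; replace with the checkpoint generated in ./weights/.
ckpt_path = "./weights/yolov1_qat_checkpoint.pth.tar"

ckpt = torch.load(ckpt_path, map_location="cpu")
# Checkpoints in the ai8x flow typically wrap the weights in a 'state_dict'
# entry; fall back to the raw object if this one does not.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```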
The following links contain previously trained models and logs.
- You can download the train/validation and test sets via the hyperlinks above.
- Or, if you have a Texas A&M account, you can access VOC2007 on datalab6.engr.tamu.edu:
- Train: /data/yiwei/VOC2007/Train
- Test: /data/yiwei/VOC2007/Test
Note that Python >= 3.8 is required.
$ git clone git@github.com:YIWEI-CHEN/yolov1_maxim.git
$ cd yolov1_maxim
# note that ai8x-training and ai8x-synthesis should be cloned in the project root (e.g., yolov1_maxim)
$ git clone --recursive https://github.com/MaximIntegratedAI/ai8x-training.git
$ git clone --recursive https://github.com/MaximIntegratedAI/ai8x-synthesis.git
# in your virtual environment
# install pytorch for NVIDIA RTX A5000
$ pip install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio==0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
# install distiller
$ cd ai8x-training/distiller
# remove the numpy, torch, and torchvision lines from requirements.txt
$ pip install -e .
# install other packages
$ pip install tensorboard matplotlib numpy colorama yamllint onnx PyGithub GitPython opencv-python
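To confirm that the CUDA build of PyTorch installed correctly (the versions below assume the pip command above), a quick check:

```python
import torch
import torchvision

# Should print 1.10.2+cu113 / 0.11.3+cu113 and True on a machine with a working GPU driver.
print(torch.__version__, torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```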
- This repo started from the snapshot at https://www.dropbox.com/s/pssda2gxrqa51v9/yolov1_maxim.zip?dl=0
- The YOLOv1 training and test framework is from https://github.com/ProgrammerZhujinming/YOLOv1
- Guanchu Wang (gw22@rice.edu)
- Yi-Wei Chen (yiwei_chen@tamu.edu)