The PyTorch implementation is ultralytics/yolov8.
The TensorRT code is derived from xiaocao-tian/yolov8_tensorrt.
Requirements:

- TensorRT 8.0+
- OpenCV 3.4.0+
Currently, we support yolov8.

- For yolov8, download the .pt from https://github.com/ultralytics/assets/releases, then follow the how-to-run steps on this page.
Config:

- Choose the model n/s/m/l/x from the command-line arguments.
- Check more configs in include/config.h
How to run:

- Generate .wts from the PyTorch .pt, or download .wts from the model zoo (a sketch of the .wts format follows the commands below):
```
# download yolov8n.pt from https://github.com/ultralytics/assets/releases
cp {tensorrtx}/yolov8/gen_wts.py {ultralytics}/ultralytics
cd {ultralytics}/ultralytics
python gen_wts.py -w yolov8n.pt -o yolov8n.wts -t detect
# a file 'yolov8n.wts' will be generated
```
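For reference, the .wts file is a plain-text dump of the model weights: a count on the first line, then one line per tensor with its name, element count, and hex-encoded float32 values. The snippet below is only a rough sketch of that conversion to illustrate the format, assuming an ultralytics-style checkpoint that stores the network under the "model" key; use the repo's gen_wts.py (which also handles the detect/cls/seg type flag) for real exports.

```python
# Illustrative sketch of the .wts layout only -- not a replacement for gen_wts.py.
import struct

import torch


def export_wts(pt_path: str, wts_path: str) -> None:
    ckpt = torch.load(pt_path, map_location="cpu")   # assumes an ultralytics-style checkpoint
    state = ckpt["model"].float().state_dict()       # weights are stored in fp16, cast to fp32
    with open(wts_path, "w") as f:
        f.write(f"{len(state)}\n")                   # header: number of weight tensors
        for name, tensor in state.items():
            values = tensor.reshape(-1).cpu().numpy()
            f.write(f"{name} {len(values)}")
            for v in values:
                f.write(" " + struct.pack(">f", float(v)).hex())  # big-endian float32 as hex
            f.write("\n")


if __name__ == "__main__":
    export_wts("yolov8n.pt", "yolov8n.wts")
```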
- Build tensorrtx/yolov8 and run.

Detection:
```
cd {tensorrtx}/yolov8/
# update kNumClass in config.h if your model is trained on a custom dataset
mkdir build
cd build
cp {ultralytics}/ultralytics/yolov8n.wts {tensorrtx}/yolov8/build
cmake ..
make
sudo ./yolov8_det -s [.wts] [.engine] [n/s/m/l/x]  # serialize model to plan file
sudo ./yolov8_det -d [.engine] [image folder] [c/g]  # deserialize and run inference, the images in [image folder] will be processed
# for example yolov8n
sudo ./yolov8_det -s yolov8n.wts yolov8n.engine n
sudo ./yolov8_det -d yolov8n.engine ../images c  # cpu postprocess
sudo ./yolov8_det -d yolov8n.engine ../images g  # gpu postprocess
```
Instance segmentation:

```
# Build and serialize TensorRT engine
./yolov8_seg -s yolov8s-seg.wts yolov8s-seg.engine s

# Download the labels file
wget -O coco.txt https://raw.githubusercontent.com/amikelive/coco-labels/master/coco-labels-2014_2017.txt

# Run inference with labels file
./yolov8_seg -d yolov8s-seg.engine ../images c coco.txt
```
Classification:

```
cd {tensorrtx}/yolov8/
# Download an inference image
wget https://raw.githubusercontent.com/lindsayshuo/infer_pic/main/1709970363.6990473rescls.jpg
mkdir samples
cp 1709970363.6990473rescls.jpg samples
# Download the ImageNet labels
wget https://raw.githubusercontent.com/joannzhang00/ImageNet-dataset-classes-labels/main/imagenet_classes.txt

# update kClsNumClass in config.h if your model is trained on a custom dataset
mkdir build
cd build
cp {ultralytics}/ultralytics/yolov8n-cls.wts {tensorrtx}/yolov8/build
cmake ..
make
sudo ./yolov8_cls -s [.wts] [.engine] [n/s/m/l/x]  # serialize model to plan file
sudo ./yolov8_cls -d [.engine] [image folder]  # deserialize and run inference, the images in [image folder] will be processed
# for example yolov8n
sudo ./yolov8_cls -s yolov8n-cls.wts yolov8n-cls.engine n
sudo ./yolov8_cls -d yolov8n-cls.engine ../samples
```
- Optional: load and run the TensorRT model in Python (a minimal engine-loading sketch follows the commands below).
```
# install python-tensorrt, pycuda, etc.
# ensure yolov8n.engine and libmyplugins.so have been built
python yolov8_det.py  # Detection
python yolov8_seg.py  # Segmentation
python yolov8_cls.py  # Classification
```
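The bundled Python demos rely on the custom plugin library plus the standard TensorRT runtime API. The sketch below shows just the engine-loading step, assuming yolov8n.engine and libmyplugins.so sit in the current directory; preprocessing, buffer management, and postprocessing are handled by yolov8_det.py / yolov8_seg.py / yolov8_cls.py.

```python
# Minimal sketch: register the tensorrtx plugins and deserialize a serialized engine.
# Paths are assumptions based on the build steps above.
import ctypes

import tensorrt as trt

ctypes.CDLL("./libmyplugins.so")           # load the custom YOLO layer plugin
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")    # make the registered plugins visible to the runtime

runtime = trt.Runtime(logger)
with open("yolov8n.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
# From here, allocate input/output buffers (e.g. with pycuda) and run inference via the
# execution context, which is roughly what the bundled demo scripts do.
```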
INT8 quantization:

- Prepare calibration images: randomly select around 1000 images from your training set. For COCO, you can also download the coco_calib calibration set from GoogleDrive or BaiduPan (pwd: a9wh).
- Unzip it in yolov8/build.
- Set the macro USE_INT8 in config.h and rebuild with make (the sketch after this list illustrates what the calibrator behind this macro does).
- Serialize the model and test as above.
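For background on what USE_INT8 enables: during serialization, TensorRT drives an entropy calibrator that feeds preprocessed calibration images and caches the resulting scales. In this repo that calibrator is implemented in C++; the Python sketch below only illustrates the same calibrator interface, with assumed file names, batch size, input shape, and a placeholder preprocessing step. It is not part of the build.

```python
# Illustration of TensorRT's INT8 entropy-calibrator interface (the repo's version is C++).
import os

import cv2
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt


class CocoCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, image_dir="coco_calib", cache_file="int8calib.table",
                 batch_size=1, input_shape=(3, 640, 640)):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.cache_file = cache_file
        self.batch_size = batch_size
        self.shape = (batch_size,) + input_shape
        self.files = [os.path.join(image_dir, f) for f in sorted(os.listdir(image_dir))]
        self.index = 0
        self.device_input = cuda.mem_alloc(int(np.prod(self.shape)) * 4)  # float32 buffer

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index + self.batch_size > len(self.files):
            return None  # no batches left: calibration is finished
        batch = np.stack([self._preprocess(p) for p in
                          self.files[self.index:self.index + self.batch_size]])
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch, dtype=np.float32))
        self.index += self.batch_size
        return [int(self.device_input)]

    def read_calibration_cache(self):
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)

    def _preprocess(self, path):
        # Placeholder: real preprocessing must match the C++ pipeline (letterbox, BGR->RGB, etc.).
        img = cv2.resize(cv2.imread(path), (self.shape[3], self.shape[2]))
        return img.transpose(2, 0, 1).astype(np.float32) / 255.0
```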
See the README on the home page for more details.