The PyTorch implementation is biubug6/Pytorch_Retinaface. I forked it into wang-xinyu/Pytorch_Retinaface and added genwts.py.

This branch uses the TensorRT 7 API; the branch trt4->retinaface uses TensorRT 4.
- Input shape `INPUT_H`, `INPUT_W` defined in `decode.h`
- INT8/FP16/FP32 can be selected by the macro `USE_INT8`, `USE_FP16` or `USE_FP32` in `retina_r50.cpp`
- GPU id can be selected by the macro `DEVICE` in `retina_r50.cpp`
- Batchsize can be selected by the macro `BATCHSIZE` in `retina_r50.cpp`
The following describes how to run `retina_r50`. `retina_mnet` is nearly the same: just generate `retinaface.wts` with `mobilenet0.25_Final.pth` and run `retina_mnet` instead.
- generate retinaface.wts from the PyTorch implementation https://github.com/wang-xinyu/Pytorch_Retinaface

```
git clone https://github.com/wang-xinyu/Pytorch_Retinaface.git
# download its weights 'Resnet50_Final.pth' and put it in Pytorch_Retinaface/weights
cd Pytorch_Retinaface
python detect.py --save_model
python genwts.py
# a file 'retinaface.wts' will be generated
```
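genwts.py serializes the saved model's weights into tensorrtx's plain-text `.wts` format: a first line with the number of weight blobs, then one line per blob holding its name, element count, and the float32 values as big-endian hex. The sketch below illustrates that format only; `write_wts` is a hypothetical helper using numpy stand-ins for torch tensors, not the repo's code.

```python
import struct

import numpy as np


def write_wts(state_dict, path):
    """Write a dict of arrays in the tensorrtx .wts plain-text format:
    first line = number of blobs, then "<name> <count> <hex f32...>" per blob."""
    with open(path, "w") as f:
        f.write(f"{len(state_dict)}\n")
        for name, arr in state_dict.items():
            flat = np.asarray(arr, dtype=np.float32).reshape(-1)
            f.write(f"{name} {flat.size}")
            for v in flat:
                # each float32 is stored as 8 hex chars, big-endian
                f.write(" " + struct.pack(">f", float(v)).hex())
            f.write("\n")


# tiny demo with a fake weight blob
write_wts({"conv1.weight": np.ones((2, 2), dtype=np.float32)}, "demo.wts")
```

The C++ side then parses these lines back into `Weights` structs when building the network.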
- put retinaface.wts into tensorrtx/retinaface, build and run

```
git clone https://github.com/wang-xinyu/tensorrtx.git
cd tensorrtx/retinaface
# put retinaface.wts here
mkdir build
cd build
cmake ..
make
sudo ./retina_r50 -s  # build and serialize the model to a file, i.e. 'retina_r50.engine'
wget https://github.com/Tencent/FaceDetection-DSFD/raw/master/data/worlds-largest-selfie.jpg
sudo ./retina_r50 -d  # deserialize the model file and run inference
```
- check the generated images, e.g. `0_result.jpg`
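The `-d` run decodes the network outputs and suppresses overlapping detections before drawing boxes on the image. As a rough illustration of that last step, here is a minimal greedy non-maximum suppression in numpy (a hypothetical `nms` helper for illustration, not the repo's CUDA plugin code):

```python
import numpy as np


def nms(boxes, scores, iou_thresh=0.4):
    """Greedy NMS over (x1, y1, x2, y2) boxes; returns kept indices."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping box i too much
    return keep
```

The real pipeline does the same thing per detection batch, with the decode/NMS work fused into the TensorRT decode plugin.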
- we also provide a python wrapper

```
# install python-tensorrt, pycuda, etc.
# ensure that retina_r50.engine and libdecodeplugin.so have been built
python retinaface_trt.py
```
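Part of the wrapper's post-processing is decoding the network's location regressions against prior (anchor) boxes, SSD-style. A hedged numpy sketch of that decode, assuming the variances `(0.1, 0.2)` commonly used by RetinaFace (`decode_boxes` is a hypothetical name, not necessarily the wrapper's function):

```python
import numpy as np


def decode_boxes(loc, priors, variances=(0.1, 0.2)):
    """SSD-style decode: loc offsets + priors in (cx, cy, w, h),
    returned as corner boxes (x1, y1, x2, y2)."""
    boxes = np.concatenate([
        # shift prior centers by the predicted offsets
        priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:],
        # scale prior sizes by the predicted log-space factors
        priors[:, 2:] * np.exp(loc[:, 2:] * variances[1]),
    ], axis=1)
    boxes[:, :2] -= boxes[:, 2:] / 2  # center -> top-left corner
    boxes[:, 2:] += boxes[:, :2]      # size -> bottom-right corner
    return boxes
```

With zero offsets the decoded box is just the prior itself in corner form, which is a handy sanity check.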
- Prepare calibration images. You can randomly select about 1000 images from your train set. For widerface, you can also download my calibration images `widerface_calib` from GoogleDrive or BaiduPan (pwd: a9wh)
- unzip it in retinaface/build
- set the macro `USE_INT8` in retina_r50.cpp and make
- serialize the model and test
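The first step above amounts to copying a random subset of your training images into one folder. A small sketch, assuming a directory tree of .jpg/.png files (`sample_calibration_images` is a hypothetical helper, not part of the repo):

```python
import random
import shutil
from pathlib import Path


def sample_calibration_images(train_dir, out_dir, n=1000, seed=0):
    """Copy a reproducible random sample of n images from train_dir
    into out_dir, e.g. to build an INT8 calibration set."""
    images = sorted(
        p for p in Path(train_dir).rglob("*")
        if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
    )
    random.Random(seed).shuffle(images)  # fixed seed -> same sample every run
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for p in images[:n]:
        shutil.copy(p, out / p.name)
    return min(n, len(images))
```

Point `out_dir` at the calibration folder expected by the INT8 calibrator, then rebuild and serialize as above.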
See the README on the home page.