## Examples

### Support for `--use_trt`

The following models support the `--use_trt` flag, which lets you use TensorRT to accelerate inference on CUDA 10.1 or higher:

- imagenet: ResNet50 / ResNet101
- detection: faster_rcnn / yolov3 / pp-yolo / ttf-net
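As a rough illustration of how the flag is typically passed, the sketch below launches a GPU serving instance with TensorRT enabled. The module path, model directory, port, and GPU id are assumptions for this example; only `--use_trt` itself comes from this section, so adapt the rest to your deployment.

```bash
# Hypothetical launch command: module path, model directory, port, and
# gpu_ids are placeholders; --use_trt enables the TensorRT backend.
python -m paddle_serving_server.serve \
    --model serving_server \
    --port 9292 \
    --gpu_ids 0 \
    --use_trt
```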