
YOLOv5 Torch


This guide explains how to run YOLOv5 with torch backend.


(Supports every device and platform except TPU, with full dynamic-shape support.)

You can use the Torch-powered detector by specifying the backend parameter:

from cvu.detector import Detector
detector = Detector(classes="coco", backend="torch")

Internally, the Detector will load the pretrained TorchScript (JIT) YOLOv5s weights.
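
For reference, here is a minimal inference sketch. It assumes the detector is called directly on a BGR NumPy image (for example, one read with OpenCV) and that the returned predictions expose a draw() method, as shown in the CVU examples; "image.jpg" and "output.jpg" are placeholder paths.

import cv2
from cvu.detector import Detector

# Create a Torch-backend detector for the 80 COCO classes
detector = Detector(classes="coco", backend="torch")

# Read an image (BGR NumPy array) and run inference on it
frame = cv2.imread("image.jpg")
predictions = detector(frame)

# Draw the detections in place and save the annotated image
predictions.draw(frame)
cv2.imwrite("output.jpg", frame)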

If you want to run the detector with your own custom weights, do the following:

While exporting your custom weights, make sure your model is on the correct device (CUDA can save TorchScript in float16 format, which is unavailable or inefficient on many CPUs). It's recommended to add the --half flag when exporting for CUDA.

python export.py --weights $PATH_TO_PYTORCH_WEIGHTS --include torchscript

Now simply set the parameter weight="path_to_custom_weights.pt" in the Detector initialization, and you're ready for inference.
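
As a rough sketch (the class list and weights path below are placeholders, assuming classes also accepts a list of custom class names):

from cvu.detector import Detector

# Point the detector at the exported TorchScript weights
detector = Detector(classes=["person", "car"],          # placeholder custom classes
                    backend="torch",
                    weight="path_to_custom_weights.pt")  # placeholder path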
