How to use the ONNX model? #1163
Comments
Hello @zys1994, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Google Colab Notebook, Docker Image, and GCP Quickstart Guide for example environments. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.
I can't understand the result I print.
I have succeeded in using it in OpenVINO, thanks.
Hi @zys1994, I met the same issue. Could you share with us how to do the post-processing of the ONNX output?
@zys1994
@goodtogood @vandesa003
@xpngzhng thanks! I have solved the normalization problem, and here is my code: #1172 (comment)
Hi @vandesa003, I have modified models.py as you say. In the boxes vector, only x and y are the same; w and h are different.
Here is part of my code using the ONNX model for inference, after further converting it to an OpenVINO model. I did not modify ultralytics' code when converting the model to ONNX, except for the input image size, and the opset should be 10, not 11. At inference time, we need to scale the OpenVINO box output (x1, y1, x2, y2) by the image width and height.

```python
import math
import os
import sys
import time

import cv2
import numpy as np
from openvino.inference_engine import IECore


class InferContext(object):
    def __init__(self, model, weights, device_name):
        self.ie = IECore()
        self.net = self.ie.read_network(model=model, weights=weights)
        self.exec_net = self.ie.load_network(network=self.net, device_name=device_name)
        self.input_blob_name = next(iter(self.net.inputs))

    def infer(self, input):
        return self.exec_net.infer(inputs={self.input_blob_name: input})


class YoloV3DetContext(object):
    def __init__(self, model, weights, device_name, width, height, conf_thres=0.3, iou_thres=0.6):
        self.context = InferContext(model=model, weights=weights, device_name=device_name)
        self.width = width
        self.height = height
        self.conf_thres = conf_thres
        self.iou_thres = iou_thres

    @staticmethod
    def letterbox(img, new_shape=(416, 416), color=(127, 127, 127)):
        pass

    @staticmethod
    def xywh2xyxy(x):
        pass

    @staticmethod
    def compute_iou(rect, rest):
        pass

    @staticmethod
    def non_max_suppression(boxes, confs, conf_thres=0.3, iou_thres=0.6):
        pass

    @staticmethod
    def scale_coords(img1_shape, coords, img0_shape):
        pass

    @staticmethod
    def clip_coords(boxes, img_shape):
        pass

    def infer(self, image):
        img_reshape = YoloV3DetContext.letterbox(image, new_shape=(self.height, self.width))[0]
        img = img_reshape[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x416x416
        img = np.ascontiguousarray(img)
        img = img.astype(dtype=np.float32)
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        img = np.expand_dims(img, axis=0)
        res = self.context.infer(img)
        confs = res['Concat_129']
        boxes = res['Concat_132']
        boxes[:, 0] *= self.width
        boxes[:, 1] *= self.height
        boxes[:, 2] *= self.width
        boxes[:, 3] *= self.height
        boxes, confs = self.non_max_suppression(boxes, confs, self.conf_thres, self.iou_thres)
        img1_shape = img_reshape.shape[:2]
        img0_shape = image.shape[:2]
        boxes = self.scale_coords(img1_shape, boxes, img0_shape)
        # print(boxes)
        return boxes.astype(dtype=np.int32), confs
```
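The commenter left the helper bodies as `pass` stubs. For reference, here is a minimal sketch of the coordinate helpers (`xywh2xyxy`, `clip_coords`, `scale_coords`) following the usual ultralytics-style letterbox conventions; this is my own reconstruction under those assumptions, not the original author's code:

```python
import numpy as np


def xywh2xyxy(x):
    # Convert (N, 4) [cx, cy, w, h] boxes to [x1, y1, x2, y2].
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2
    y[:, 1] = x[:, 1] - x[:, 3] / 2
    y[:, 2] = x[:, 0] + x[:, 2] / 2
    y[:, 3] = x[:, 1] + x[:, 3] / 2
    return y


def clip_coords(boxes, img_shape):
    # Clamp x1, x2 to [0, width] and y1, y2 to [0, height];
    # img_shape is (height, width).
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, img_shape[1])
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, img_shape[0])
    return boxes


def scale_coords(img1_shape, coords, img0_shape):
    # Map boxes from the letterboxed shape img1 back to the original
    # image shape img0: subtract the letterbox padding, then divide by
    # the resize gain, then clip to the original image bounds.
    gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])
    pad_x = (img1_shape[1] - img0_shape[1] * gain) / 2
    pad_y = (img1_shape[0] - img0_shape[0] * gain) / 2
    coords[:, [0, 2]] -= pad_x
    coords[:, [1, 3]] -= pad_y
    coords /= gain
    return clip_coords(coords, img0_shape)
```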
Hi, could you share your NMS code?
Thanks a lot!
@cendelian you can refer to my Python implementation.
@zjd1988 Hello everyone! It's great to see the community actively engaging and helping each other out with YOLOv3 and ONNX models. 😊

For those looking for NMS (Non-Maximum Suppression) code: NMS is a crucial step in object detection that ensures you only keep the best bounding box for each detected object. The NMS function typically takes the bounding boxes and their corresponding confidence scores, filters out boxes with a confidence below a threshold, and then selects the best bounding boxes while suppressing the non-maximal ones based on the IoU (Intersection over Union) threshold.

While I can't provide a direct code snippet here, I encourage you to check out the Ultralytics documentation for guidance on post-processing steps, including NMS. You can find detailed explanations and examples that should help you implement NMS correctly in your pipeline. Keep up the great collaboration, and if you have further questions or run into issues, feel free to reach out! 🚀
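The greedy procedure described above (filter by confidence, keep the highest-scoring box, suppress overlapping boxes) can be sketched in a few lines of NumPy. This is a generic, class-agnostic illustration with an assumed epsilon to avoid division by zero, not the exact ultralytics implementation:

```python
import numpy as np


def iou(box, boxes):
    # IoU of one [x1, y1, x2, y2] box against an (N, 4) array of boxes.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)


def non_max_suppression(boxes, scores, conf_thres=0.3, iou_thres=0.6):
    # 1. Drop boxes below the confidence threshold.
    mask = scores >= conf_thres
    boxes, scores = boxes[mask], scores[mask]
    # 2. Greedily keep the highest-scoring box, then suppress every
    #    remaining box whose IoU with it exceeds iou_thres.
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        ious = iou(boxes[i], boxes[order[1:]])
        order = order[1:][ious <= iou_thres]
    return boxes[keep], scores[keep]
```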
I convert PyTorch to ONNX, and then convert ONNX to OpenVINO.
I get a 10674x4 vector and a 10674x2 vector from the model.
I wonder how to use the 10674x4 vector for boxes and the 10674x2 vector for classes; some info I printed left me confused.
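Assuming, as in the OpenVINO example earlier in the thread, that the 10674x4 output holds normalized (x1, y1, x2, y2) boxes and the 10674x2 output holds per-class scores, a minimal decoding step (before NMS) might look like this; the function name and thresholds are illustrative, not from the repository:

```python
import numpy as np


def decode_outputs(boxes, confs, img_w, img_h, conf_thres=0.3):
    # boxes: (N, 4) normalized [x1, y1, x2, y2]; confs: (N, num_classes).
    # Scale normalized corners to pixel coordinates.
    boxes = boxes * np.array([img_w, img_h, img_w, img_h], dtype=np.float32)
    # Take the best class per box, then drop low-confidence detections.
    class_ids = confs.argmax(axis=1)
    scores = confs.max(axis=1)
    keep = scores >= conf_thres
    return boxes[keep], scores[keep], class_ids[keep]
```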