How to convert the outputs of yolov5.onnx to boxes, labels and scores #708
Comments
I think you should look at the output of non_max_suppression, which is called 'pred' in detect.py. It has the form (x1, y1, x2, y2, conf, cls). You can arrange its elements however you like and write them to txt or json. |
Thanks. I found the method too; maybe I'll recode it in C++, since I'm using the ONNX Runtime C++ version. |
hello, @JiaoPaner, @NosremeC I also want to do the same work with the ONNX Runtime C++ version, but I met some problems with Detect in yolo.py. After I set self.training==False, I don't know why I still get an output of x and not (torch.cat(z, 1), x). This is some code in yolo.py:
|
@yongjingli You can go see #343; that issue solved my problem. I recoded the non_max_suppression from yolov5/utils/general.py into a C++ version for yolov5s.onnx (in export.py I set model.model[-1].export = False). The main output-analysis code is as follows:

float* output = output_tensor[0].GetTensorMutableData<float>(); // ONNX Runtime output -> (1, 25200, 85)
size_t size = output_tensor[0].GetTensorTypeAndShapeInfo().GetElementCount(); // 1x25200x85 = 2142000
int dimensions = 85;          // indices 0-3 -> box, 4 -> objectness, 5-84 -> COCO class confidences
int rows = size / dimensions; // 25200
int confidenceIndex = 4;
int labelStartIndex = 5;
float modelWidth = 640.0f;
float modelHeight = 640.0f;
float xGain = modelWidth / image.width;
float yGain = modelHeight / image.height;

std::vector<cv::Vec4f> locations;
std::vector<int> labels;
std::vector<float> confidences;
std::vector<cv::Rect> src_rects;
std::vector<cv::Rect> res_rects;
std::vector<int> res_indexs;
cv::Rect rect;
cv::Vec4f location;

for (int i = 0; i < rows; ++i) {
    int index = i * dimensions;
    if (output[index + confidenceIndex] <= 0.4f) continue; // skip low-objectness rows

    // Multiply class scores by the objectness score, as non_max_suppression does.
    for (int j = labelStartIndex; j < dimensions; ++j) {
        output[index + j] = output[index + j] * output[index + confidenceIndex];
    }

    for (int k = labelStartIndex; k < dimensions; ++k) {
        if (output[index + k] <= 0.5f) continue; // class-score threshold

        // Convert center/width/height to corner coordinates, scaled back to the source image.
        location[0] = (output[index]     - output[index + 2] / 2) / xGain; // top-left x
        location[1] = (output[index + 1] - output[index + 3] / 2) / yGain; // top-left y
        location[2] = (output[index]     + output[index + 2] / 2) / xGain; // bottom-right x
        location[3] = (output[index + 1] + output[index + 3] / 2) / yGain; // bottom-right y
        locations.emplace_back(location);

        rect = cv::Rect(location[0], location[1],
                        location[2] - location[0], location[3] - location[1]);
        src_rects.push_back(rect);
        labels.emplace_back(k - labelStartIndex);
        confidences.emplace_back(output[index + k]);
    }
}

utils::nms(src_rects, res_rects, res_indexs, 0.5f); // 0.5f IoU threshold assumed; the original call relied on a default argument

cJSON *result = cJSON_CreateObject(), *items = cJSON_CreateArray();
for (size_t i = 0; i < res_indexs.size(); ++i) {
    cJSON *item = cJSON_CreateObject();
    int index = res_indexs[i];
    cJSON_AddStringToObject(item, "label", classes[labels[index]].c_str());
    cJSON_AddNumberToObject(item, "score", confidences[index]);
    cJSON *location = cJSON_CreateObject();
    cJSON_AddNumberToObject(location, "x", locations[index][0]);
    cJSON_AddNumberToObject(location, "y", locations[index][1]);
    cJSON_AddNumberToObject(location, "width", locations[index][2] - locations[index][0]);
    cJSON_AddNumberToObject(location, "height", locations[index][3] - locations[index][1]);
    cJSON_AddItemToObject(item, "location", location);
    cJSON_AddItemToArray(items, item);
}
cJSON_AddNumberToObject(result, "code", 0);
cJSON_AddStringToObject(result, "msg", "success");
cJSON_AddItemToObject(result, "data", items);
char *resultJson = cJSON_PrintUnformatted(result);
cJSON_Delete(result); // free the cJSON tree; resultJson is a separate heap string the caller must free
return resultJson;

void utils::nms(const std::vector<cv::Rect> &srcRects, std::vector<cv::Rect> &resRects, std::vector<int> &resIndexs, float thresh) {
    resRects.clear();
    const size_t size = srcRects.size();
    if (!size) return;

    // Sort the bounding boxes by the bottom-right y-coordinate of the bounding box.
    std::multimap<int, size_t> idxs;
    for (size_t i = 0; i < size; ++i) {
        idxs.insert(std::pair<int, size_t>(srcRects[i].br().y, i));
    }

    // Keep looping while some indexes still remain in the indexes list.
    while (idxs.size() > 0) {
        // Grab the last rectangle.
        auto lastElem = --std::end(idxs);
        const cv::Rect& last = srcRects[lastElem->second];
        resIndexs.push_back(lastElem->second);
        resRects.push_back(last);
        idxs.erase(lastElem);

        for (auto pos = std::begin(idxs); pos != std::end(idxs); ) {
            // Grab the current rectangle.
            const cv::Rect& current = srcRects[pos->second];
            float intArea = (last & current).area();
            float unionArea = last.area() + current.area() - intArea;
            float overlap = intArea / unionArea;
            // If there is sufficient overlap, suppress the current bounding box.
            if (overlap > thresh) pos = idxs.erase(pos);
            else ++pos;
        }
    }
} |
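For what it's worth, here is a tiny usage sketch of the nms helper above (the 0.5f IoU threshold and the sample boxes are assumptions; the utils::nms definition above is assumed to be in scope):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Two heavily overlapping boxes plus one isolated box.
    std::vector<cv::Rect> srcRects = {{10, 10, 100, 100}, {12, 12, 100, 100}, {300, 300, 50, 50}};
    std::vector<cv::Rect> resRects;
    std::vector<int> resIndexs;
    utils::nms(srcRects, resRects, resIndexs, 0.5f); // suppress overlaps above 0.5 IoU
    std::printf("%zu boxes kept\n", resRects.size()); // prints "2 boxes kept"
    return 0;
}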
@JiaoPaner Is there a C# version available? Thanks a lot. |
@ricklina90 You can just recode the above C++ code into C#. |
@JiaoPaner After recoding the C++ code into C#, it works fine. Thank you. |
Is there a way to reshape this to [255, 20, 20], etc.? |
My ONNX session outputs (1, 25200, 11), but non_max_suppression outputs (300, 6). |
@JonathanLehner 300 is the number of boxes detected; the 6 values are center_x, center_y, w, h, score, cls_id. |
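Concretely, a hedged sketch of decoding such a (300, 6) buffer in C++, following the layout described in the reply above (the function name and score threshold are illustrative, not from the original post):

#include <cstdio>

// pred points at 300 * 6 floats: center_x, center_y, w, h, score, cls_id per row.
void decodeRows(const float* pred, float scoreThresh = 0.25f) {
    const int numBoxes = 300, stride = 6;
    for (int i = 0; i < numBoxes; ++i) {
        const float* row = pred + i * stride;
        float score = row[4];
        if (score < scoreThresh) continue;  // assumed threshold
        float x1 = row[0] - row[2] / 2.0f;  // top-left x from center/size
        float y1 = row[1] - row[3] / 2.0f;  // top-left y
        float x2 = row[0] + row[2] / 2.0f;  // bottom-right x
        float y2 = row[1] + row[3] / 2.0f;  // bottom-right y
        int clsId = static_cast<int>(row[5]);
        std::printf("box (%.1f, %.1f, %.1f, %.1f) score %.2f class %d\n",
                    x1, y1, x2, y2, score, clsId);
    }
}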
@JiaoPaner I have a couple of queries regarding your C++ implementation. a) You are considering only 1 output layer when there are 3 in total. Is considering the bounding boxes from the output layer with the smallest stride sufficient? b) In your box calculation you haven't used any sigmoid function, anchors, or stride lengths. How are you getting the box dimensions correctly? |
@kafan1986 name: output, name: 404, name: 687, name: 970 — there are 4 outputs, but we need only the first one, named "output". You needn't apply any sigmoid function anymore. |
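For example, with ONNX Runtime's C++ API you can request just that tensor by name (a fragment; session and inputTensor are assumed to be set up already, and "images" is yolov5's usual input name):

const char* inputNames[] = {"images"};
const char* outputNames[] = {"output"}; // skip "404", "687", "970"
auto outputTensors = session.Run(Ort::RunOptions{nullptr},
                                 inputNames, &inputTensor, 1,
                                 outputNames, 1);
float* output = outputTensors[0].GetTensorMutableData<float>(); // (1, 25200, 85)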
Hi, have you managed to export the ONNX model? I tried torch.onnx.export(model, img, "yolos.onnx") but got the error "Exporting the operator hardswish to ONNX opset version 9 is not supported. Please open a bug to request ONNX export support for the missing operator." I've been stuck on this problem for a while. |
@Jiang15 Set opset_version=12 in your torch.onnx.export call. |
@JiaoPaner Hello, following your approach I modified export.py in the yolov5 project and set model.model[-1].export = False. Using the above C++ code, I found the confidence of the detected results is very low; on the classic dog.jpg, the detected confidences are all between 0.4 and 0.8. What could cause this? |
@pangchao-git You can see my repo https://github.com/JiaoPaner/detector-onnx-linux.git (based on yolov5 3.0); it works. BTW, before you convert your trained .pt model to ONNX, you must modify two files in yolov5. In export.py, set model.model[-1].export = False. In yolo.py, modify Detect:

class Detect(nn.Module):
    stride = None  # strides computed during build
    export = True  # onnx export

    def forward(self, x):
        # x = x.copy()  # for profiling
        z = []  # inference output
        # self.training |= self.export
        if (self.training is True) & (self.export is True):
            self.training = False
        ...

(This forces the inference branch during export, so the graph emits the concatenated (1, 25200, 85) "output" tensor instead of the raw per-layer feature maps.) |
I re-exported the ONNX model following your instructions and tried to run your project, but the post-processing that computes confidence seems logically problematic: the confidence of the output results is not right. What is your post-processing based on? |
@pangchao-git My project is based on yolov5 3.0, and I fixed some bugs yesterday. The post-processing that computes confidence follows the non_max_suppression method in yolov5/utils/general.py. My C++ skills are average, so if you find any errors, please tell me. |
There is a bug in your preprocessing. If you look at detect.py in the Python code, the picture passed to the model is not resized the way you do it in utils::createInputImage; you should pad the image with a black border instead. For example, I used the following for a (480, 640) input: cv::copyMakeBorder(image, dst, 0, 160, 0, 0, cv::BORDER_CONSTANT); and then you should change variables like xGain and yGain in detector.cpp to 1. |
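A generic version of that padding step might look like this (a sketch assuming a 640x640 model input; the function name is made up):

#include <opencv2/opencv.hpp>
#include <algorithm>

// Resize keeping aspect ratio, then pad the bottom/right with black to 640x640.
cv::Mat letterbox(const cv::Mat& image, int dstSize = 640) {
    float scale = dstSize / static_cast<float>(std::max(image.cols, image.rows));
    cv::Mat resized;
    cv::resize(image, resized, cv::Size(), scale, scale);
    cv::Mat dst;
    cv::copyMakeBorder(resized, dst,
                       0, dstSize - resized.rows,  // pad bottom
                       0, dstSize - resized.cols,  // pad right
                       cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));
    // With this padding, map boxes back by dividing model-space coords by 'scale'
    // (i.e. xGain and yGain both become 'scale').
    return dst;
}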
@JiaoPaner How do I interpret the following outputs? name: boxes, name: 444 |
@abdulw976 ONNX inference is very easy:
|
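For reference, a minimal hedged sketch of an ONNX Runtime C++ inference pass (the model path, the "images"/"output" tensor names, the 640x640 input, and the Linux-style char* path are all assumptions):

#include <onnxruntime_cxx_api.h>
#include <array>
#include <cstdio>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov5");
    Ort::SessionOptions opts;
    Ort::Session session(env, "yolov5s.onnx", opts);

    // NCHW float input normalized to [0, 1]; fill from your preprocessed image.
    std::vector<float> input(1 * 3 * 640 * 640, 0.0f);
    std::array<int64_t, 4> shape{1, 3, 640, 640};
    auto memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value tensor = Ort::Value::CreateTensor<float>(
        memInfo, input.data(), input.size(), shape.data(), shape.size());

    const char* inputNames[] = {"images"};
    const char* outputNames[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               inputNames, &tensor, 1, outputNames, 1);

    float* pred = outputs[0].GetTensorMutableData<float>(); // (1, 25200, 85)
    std::printf("first objectness score: %f\n", pred[4]);
    return 0;
}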
@JonathanLehner Were you able to solve the issue? |
❔Question
Hi buddy, can you help me explain the outputs of the ONNX model? I don't know how to convert the outputs to boxes, labels, and scores.
I used Netron to display this ONNX model.
outputs:
name: classes
type: float32[1,3,80,80,85]
name: boxes
type: float32[1,3,40,40,85]
name: 444
type: float32[1,3,20,20,85]
Why are the types five-dimensional? How do I convert them into detection results?
Thanks.