Still working with the latest version of YOLOv5? #6
Comments
I've noticed the same phenomenon. I have a pre-trained model that works properly when consumed via something like detect.py, but when I package it according to the instructions in this repository and serve it with TorchServe, I consistently receive empty detections.
Hello, if you find a fix for this, please share it or open a PR here for the other people who may have the same issue.
Sorry for not taking the time to put together a full PR, but I've at least figured out a workaround that works on my end. Since I was unable to get your handler working on my machine, I don't know what a successful response from your code looks like; I made no attempt to avoid breaking changes and simply chose a response format that made sense to me.

I'm not exactly sure what I'm doing differently from your implementation. I essentially traced through YOLOv5's detect.py to see what needed to happen to images before they are fed to the model. I'm able to serve my model properly using this rewrite of torchserve_handler.py. A sample output looks something like this:

[
{
"x1": 0.005348515696823597,
"y1": 0.22668543457984924,
"x2": 0.5326404571533203,
"y2": 0.756635844707489,
"confidence": 0.8473663330078125,
"class": "truck"
},
{
"x1": 0.8471749424934387,
"y1": 0.8061075210571289,
"x2": 1.0003573894500732,
"y2": 1.0002152919769287,
"confidence": 0.807182252407074,
"class": "person"
},
{
"x1": 0.5227496027946472,
"y1": 0.35219940543174744,
"x2": 0.7011967897415161,
"y2": 0.6181403398513794,
"confidence": 0.6530587077140808,
"class": "truck"
},
{
"x1": 0.5219056010246277,
"y1": 0.35231834650039673,
"x2": 0.9571782350540161,
"y2": 0.6313372850418091,
"confidence": 0.5114378333091736,
"class": "truck"
}
]
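For readers who want to see what "tracing through detect.py" amounts to, here is a minimal sketch of the preprocessing step YOLOv5 expects before inference: a letterbox resize onto a padded square canvas, BGR-to-RGB conversion, channel reordering, and normalization to [0, 1]. This is an illustrative reconstruction, not the handler from the linked rewrite; the `letterbox` function name and the dependency-free nearest-neighbour resize are my own choices.

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Resize an HWC uint8 BGR image onto a size x size canvas, preserving
    aspect ratio, and return a (1, 3, size, size) float32 array in [0, 1]."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index arrays (avoids a cv2 dependency;
    # YOLOv5's own letterbox uses cv2.resize with linear interpolation)
    rows = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    resized = img[rows][:, cols]
    # Center the resized image on a canvas filled with YOLOv5's pad value 114
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    top, left = (size - new_h) // 2, (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    # BGR -> RGB, HWC -> CHW, uint8 -> float32 in [0, 1], add a batch dim;
    # wrap the result with torch.from_numpy(...) before calling the model
    chw = canvas[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) / 255.0
    return chw[None]

batch = letterbox(np.zeros((480, 640, 3), dtype=np.uint8))
print(batch.shape)  # (1, 3, 640, 640)
```

After inference, detect.py additionally applies non-maximum suppression (`non_max_suppression` in YOLOv5's utils) to the raw predictions before any boxes are usable.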
Thanks to all the contributors for taking time out of their busy schedules to maintain this project. I have confirmed that it works correctly with the code above, at least in a CPU environment. A sample of my output:
{
"x1": 0.4146665930747986,
"y1": 0.6899288892745972,
"x2": 0.4722073972225189,
"y2": 0.7431305646896362,
"confidence": 0.43372729420661926,
"class": "person"
},
...
Thanks to all the contributors!
Shouldn't we rescale the bbox dimensions to the original image size, since the outputs above are for the 640x640 image? Edit: No, because the values are normalized to fractions of the image size.
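Since the handler's coordinates are normalized, mapping a detection back onto the original image is just a multiplication by its width and height. A hypothetical helper (the `to_pixels` name and the clamping choice are mine; the sample output above shows x2/y2 occasionally exceeding 1.0 by a rounding hair, hence the clamp):

```python
def to_pixels(det: dict, img_w: int, img_h: int) -> dict:
    """Convert one normalized-xyxy detection (format shown above)
    to pixel coordinates of the original image."""
    out = dict(det)
    out["x1"] = max(det["x1"], 0.0) * img_w
    out["y1"] = max(det["y1"], 0.0) * img_h
    # Clamp to 1.0: the sample output contains values like 1.00035
    out["x2"] = min(det["x2"], 1.0) * img_w
    out["y2"] = min(det["y2"], 1.0) * img_h
    return out

det = {"x1": 0.5227496, "y1": 0.3521994, "x2": 0.7011968,
       "y2": 0.6181403, "confidence": 0.653, "class": "truck"}
box = to_pixels(det, 1280, 720)  # pixel-space box for a 1280x720 image
```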
I was in the same situation with a custom-trained YOLOv5s (release 7.0) from this Roboflow notebook. Your torchserve_handler.py saved my day, although on the first try I got an exception during inference in the TorchServe log about an incompatibility between the torch and torchvision packages.
I installed
Sure @khelkun, go ahead with the PR. Sorry, I haven't been active on this repo for some time and I'm not planning to work on it right now (already pretty busy with work). However, if there are any PRs I'll review them with pleasure.
About the alternative torchserve_handler.py suggested in [this issue](louisoutin#6)
Hi all, I recently rearranged all the materials for deploying YOLOv5 (tag: 7.0) on TorchServe.
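For anyone putting together such deployment materials, the packaging and serving steps generally look like the following. This is a sketch, not the commands from the rearranged repo: the file names (`yolov5s.torchscript.pt`, `torchserve_handler.py`, `index_to_name.json`, `sample.jpg`) are placeholders for your own artifacts.

```shell
# Package the exported weights and the custom handler into a .mar archive
torch-model-archiver \
  --model-name yolov5 \
  --version 1.0 \
  --serialized-file yolov5s.torchscript.pt \
  --handler torchserve_handler.py \
  --extra-files index_to_name.json \
  --export-path model_store

# Start TorchServe and register the archived model
torchserve --start --model-store model_store --models yolov5=yolov5.mar

# Send an image to the inference endpoint
curl -X POST http://127.0.0.1:8080/predictions/yolov5 -T sample.jpg
```

If the response is an empty list even though detect.py finds objects, the mismatch is usually in the handler's preprocessing, as discussed earlier in this thread.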
I am trying to run this with my weights trained on the latest version of YOLOv5, but I don't get any predictions.
Is there something that could have changed?