Perform inference on an opencv image loaded from memory #1834
Comments
A somewhat related issue where you can find a code snippet that prints all detections:
I double-checked that I'm feeding the correct image type, tested with a lower confidence threshold (went from 0.7 down to 0.5), and even switched the loaded model to "yolo_nas_m" to make sure it wasn't the model itself, and still haven't been able to get any inferences working. I'm checking the size of prediction.bboxes_xyxy before accessing it so the program doesn't crash, but every one of those is empty.
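The defensive access pattern described above (checking the size of `bboxes_xyxy` before indexing) can be sketched as follows. `first_box_or_none` is a hypothetical helper, not part of super-gradients; only the `bboxes_xyxy` attribute name comes from the thread:

```python
def first_box_or_none(bboxes):
    """Return the first detection box, or None when there are no detections.

    `bboxes` is expected to be an indexable sequence such as the
    `bboxes_xyxy` array mentioned above (shape [N, 4]).
    """
    if bboxes is None or len(bboxes) == 0:
        return None
    return bboxes[0]


# Guarded access avoids the IndexError raised on empty predictions:
assert first_box_or_none([]) is None                      # no detections
assert first_box_or_none([[10, 20, 110, 220]]) == [10, 20, 110, 220]
```

Returning `None` (rather than letting the index raise) makes the "model found nothing" case explicit at the call site.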
Update:
If I then do the inference and try to access the predictions, everything works as expected. If instead I try to instantiate the model with GPU acceleration (with the following modification)
This is when I always get empty predictions. Additional info:
Any help on this matter would be appreciated @BloodAxe
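The CPU-vs-GPU instantiation difference described a few lines up can be sketched like this; the model name `yolo_nas_s` and the `.to(device)` call are assumptions based on the thread, not the poster's exact modification:

```python
def load_detector(device="cpu", model_name="yolo_nas_s"):
    """Load a YOLO-NAS model on the requested device (hypothetical sketch).

    The super-gradients import is kept inside the function so this module
    can be inspected without the library installed.
    """
    from super_gradients.training import models  # heavy, optional dependency

    model = models.get(model_name, pretrained_weights="coco")
    # "cpu" reportedly works; "cuda" is where the empty predictions appear.
    return model.to(device)
```

Loading the same weights twice (`load_detector("cpu")` and `load_detector("cuda")`) gives two models whose outputs can be compared on the same frame.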
I don't see how this can be happening. Please double-check everything on your end.
Please share additional details: what GPU do you have, and which OS/Python version are you using? If possible, provide a minimal yet complete code sample that reproduces your issue.
I'm providing some code that opens a VideoCapture from the webcam, extracts individual frames, and tries to perform inference on both the CPU and GPU for comparison:
Additional hardware information:
Package information (using a conda environment):
Hope that is enough to reproduce the behaviour @BloodAxe
EDIT: additional software info
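A reproduction along the lines described (one webcam frame run through both a CPU and a GPU model) might look like the sketch below. `boxes_roughly_equal` and `compare_cpu_gpu_on_webcam` are hypothetical names, and the super-gradients/OpenCV calls are assumptions about the setup, not the poster's exact snippet:

```python
def boxes_roughly_equal(boxes_a, boxes_b, tol=1.0):
    """Compare two lists of [x1, y1, x2, y2] boxes coordinate by coordinate."""
    if len(boxes_a) != len(boxes_b):
        return False
    return all(
        abs(a - b) <= tol
        for box_a, box_b in zip(boxes_a, boxes_b)
        for a, b in zip(box_a, box_b)
    )


def compare_cpu_gpu_on_webcam():
    """Grab one webcam frame and compare CPU vs GPU predictions on it."""
    # Heavy imports kept local; both libraries are assumptions about the setup.
    import cv2
    from super_gradients.training import models

    cpu_model = models.get("yolo_nas_s", pretrained_weights="coco").to("cpu")
    gpu_model = models.get("yolo_nas_s", pretrained_weights="coco").to("cuda")

    cap = cv2.VideoCapture(0)      # default webcam
    ok, frame = cap.read()         # frame is a BGR numpy array held in memory
    cap.release()
    if not ok:
        raise RuntimeError("could not grab a frame from the webcam")

    cpu_boxes = cpu_model.predict(frame).prediction.bboxes_xyxy.tolist()
    gpu_boxes = gpu_model.predict(frame).prediction.bboxes_xyxy.tolist()
    print("CPU/GPU predictions match:", boxes_roughly_equal(cpu_boxes, gpu_boxes))


# Call compare_cpu_gpu_on_webcam() to run the live comparison.
```

The tolerance in `boxes_roughly_equal` allows for small numeric differences between CPU and GPU arithmetic; identical-down-to-the-pixel boxes are not guaranteed even when both devices work correctly.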
Thanks for the detailed snippet to reproduce. Unfortunately, I was not able to reproduce the issue yet. On my 4090 it works fine and the predictions on CPU & GPU are identical. I will try later on a 1070, which I happen to have, and will let you know how it goes. Update: the code works well on both the 4090 and the 1070 🤷‍♂️
OK, this is probably where it is all coming from: pytorch/pytorch#58123. On our end we will introduce an …
We have a workaround PR to disable the mixed precision used in …
By changing line 24 of the snippet to:
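The exact one-line change is not quoted above. Given that the linked PyTorch issue concerns FP16/mixed precision on older GPUs, a workaround of the following shape (forcing full precision around `predict`) is plausible; `predict_full_precision` and the autocast wrapper are assumptions, not the merged fix:

```python
def predict_full_precision(model, image, conf=0.5):
    """Run model.predict with CUDA mixed precision (autocast) disabled.

    Hypothetical workaround: on GPUs where FP16 inference misbehaves
    (see pytorch/pytorch#58123), forcing FP32 can restore detections.
    """
    import torch  # local import so the helper can be defined without torch

    with torch.cuda.amp.autocast(enabled=False):
        return model.predict(image, conf=conf)
```

If the upstream fix lands in super-gradients itself, a wrapper like this should become unnecessary.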
💡 Your Question
Is it possible to perform inference on an image already in memory (through an OpenCV VideoCapture)?
I have tried to do so but get no detections when trying to perform inference in the following way:
In doing so and debugging the code, I get an error on the line `print(prediction.bboxes_xyxy[0])` with the following text:

```
File "D:\source\repos\libMuinen_UI\nas_interface.py", line 26, in detect
    print(prediction.bboxes_xyxy[0])
IndexError: index 0 is out of bounds for axis 0 with size 0
```
If I understand correctly, this means the prediction results are empty and I can't access them, but from the input image I know I should be getting at least some detections.
I would love to know if I'm using the predict method incorrectly, and would appreciate any tips to make my code work.
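For reference, in-memory inference of the kind asked about can be sketched as below. The original snippet is not shown in this thread, so the model name, the `predict` call, and the result attributes here are assumptions based on super-gradients' documented usage, not the poster's code:

```python
def detect_in_memory(frame, conf=0.5):
    """Run YOLO-NAS inference on an image already held in memory.

    `frame` is a numpy array such as one returned by
    cv2.VideoCapture.read(); super-gradients accepts such arrays
    directly in predict(), with no need to write the image to disk.
    """
    from super_gradients.training import models  # heavy, optional dependency

    model = models.get("yolo_nas_s", pretrained_weights="coco")
    result = model.predict(frame, conf=conf)
    return result.prediction.bboxes_xyxy  # shape [N, 4]; may be empty


# Guard before indexing to avoid the IndexError shown above:
#     boxes = detect_in_memory(frame)
#     if len(boxes):
#         print(boxes[0])
```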
Versions
No response