is there some error with dynamic export for TensorRT? #8866
@hdnh2006 I'm not able to test with multiple streams, but I did test with detect.py normally (batch size 1) and PyTorch Hub (batch size 2) and it works correctly, i.e.

```shell
!python export.py --weights yolov5s.pt --include engine --imgsz 640 --device 0 --dynamic --batch 8  # export
!python detect.py --weights yolov5s.engine --imgsz 640 --device 0  # inference
```

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.engine')

# Images
dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')]  # batch of images

# Inference
results = model(imgs)
results.print()  # or .show(), .save()
```

Do you have a streams.txt with public addresses we could use to debug?
Thanks Glenn for your quick reply. I can share my "streams.txt" file with you in private.
@hdnh2006 maybe there's an easier way. Can you reproduce the issue by running your TRT model with val.py at different batch-sizes?
@hdnh2006 more info here. When I export at --batch 8 and run val.py with --batch 1 and --batch 2, they both work but they both seem locked into --batch-size 8 inference. This is probably why 3 streams are not working: they are at batch size 3 instead of 8. I think this is being forced in DetectMultiBackend, I'll take a look.
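The locked-batch symptom can be simulated in plain NumPy. This is a hypothetical sketch, not the actual DetectMultiBackend code: if the engine's output buffer is always allocated at the exported max batch size and only the first `n` rows are written per inference, the remaining rows contain stale garbage.

```python
import numpy as np

MAX_BATCH = 8  # batch size baked in at export time (--batch 8)

def infer_locked(batch):
    """Mimic an engine whose output buffer is fixed at MAX_BATCH.

    Only the first len(batch) rows are written; the rest keep
    whatever was in the buffer (NaN here stands in for garbage).
    """
    out = np.full((MAX_BATCH, 4), np.nan)  # stale buffer contents
    out[: len(batch)] = batch * 2          # "real" detections
    return out

preds = infer_locked(np.ones((3, 4)))      # 3 streams in
print(preds.shape)                         # (8, 4) -- still max batch
print(np.isnan(preds[3:]).all())           # True: rows 3-7 are garbage
```

A caller expecting 3 results per forward pass would instead see 8, which matches the behavior reported with val.py above.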
Yes, you are right, I am getting the same logs.
@hdnh2006 it looks like there is some bug in TRT dynamic handling in DetectMultiBackend. I've added a TODO to resolve this. In the meantime I would export at a fixed batch size, i.e. --batch 3, and avoid --dynamic.
@democat3457 can you take a look at this bug report? It's related to your PR #8526. It seems that --dynamic TRT models are always outputting their max batch size, i.e.

```python
im = 'data/images/zidane.jpg'
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.engine')  # exported with --dynamic --batch 8
results = model(im)
print(len(results))  # returns 8, but only the first image has meaningful output; results 2-7 appear to be uninitialized data
```

I can always update the output in DetectMultiBackend to remove excess results, but this seems a bit hackish. Is this the way TRT dynamic is supposed to operate, or is there a problem?
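The trimming workaround mentioned above can be sketched in a few lines (hypothetical; `trim_to_batch` is an illustrative helper, not the actual fix that landed in DetectMultiBackend): slice the engine output down to the real input batch size so callers never see the padded rows.

```python
import numpy as np

def trim_to_batch(y, n):
    """Drop padded rows beyond the real batch size n."""
    return y[:n]

y = np.zeros((8, 100, 85))   # dynamic engine output, max batch 8
y[0, 0, 0] = 1.0             # only image 0 has meaningful data
out = trim_to_batch(y, 1)    # actual batch size was 1
print(out.shape)             # (1, 100, 85)
```

The slice is cheap (a view, no copy), but it does silently hide whatever the engine wrote into the excess rows, which is why it reads as hackish rather than a root-cause fix.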
@hdnh2006 @democat3457 possible fix in #8869, but a little more hack-ish than I'd like.
@hdnh2006 good news 😃! Your original issue may now be fixed ✅ in PR #8869 with the help of @democat3457. To receive this update:
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
Search before asking
YOLOv5 Component
Export
Bug
Hi, I was trying the new feature of dynamic size for TensorRT. I exported yolov5s.pt to TensorRT using the following command:

According to this, the max batch size is set to 8, so I tried with a couple of IP cameras I have for testing and I got the following:
I have to say that the normal yolov5s.pt works perfectly, so it is not a problem with my IP cameras:

Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?