How to reduce the inference time? #24

Compared with v3-spp, inference takes twice as long, and the augmentation is important to me, so I cannot remove it. Is there any way to speed up without losing too much mAP?
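For reference, the usual levers for cutting latency without retraining are a smaller model variant, a smaller inference resolution, and FP16 on GPU. A minimal sketch, assuming the `torch.hub` entry point the repo exposes (model names and flags may differ across versions):

```python
# Minimal sketch of common latency levers, assuming the ultralytics/yolov5
# torch.hub entry point; names and flags may differ across versions.
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# 1) Pick a smaller variant: yolov5s trades some mAP for much lower latency.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).to(device)
model.eval()

# 2) Shrink the input: latency scales roughly with the pixel count.
img = torch.zeros(1, 3, 416, 416, device=device)  # vs. the default 640x640

# 3) FP16 roughly halves GPU latency on cards with fast half-precision math.
if device == 'cuda':
    model, img = model.half(), img.half()

with torch.no_grad():
    pred = model(img)
```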
Comments
Hello @sky-fly97, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Google Colab Notebook, Docker Image, and GCP Quickstart Guide for example environments. If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

For more information please visit https://www.ultralytics.com.
@sky-fly97 we are continuously researching speed and accuracy improvements. If you have proven ideas that show quantitative improvement we'd be happy to integrate them into our work. If you have GPU resources you'd like to contribute to the research we can send you docker files to run as well, which will speed up our research and lead to improvements faster.
@sky-fly97 new models were released yesterday which are smaller and faster. See the readme table for updated speeds.
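If you want to sanity-check the readme numbers on your own hardware, a rough micro-benchmark looks like the sketch below; `model` and `img` stand in for whatever eval-mode model and input tensor you loaded, and warm-up iterations matter on GPU:

```python
# Rough micro-benchmark sketch: warm up, then average over many runs.
# `model` and `img` are assumed to be a loaded eval-mode model and input tensor.
import time
import torch

def benchmark(model, img, warmup=10, iters=100):
    with torch.no_grad():
        for _ in range(warmup):           # warm-up: CUDA kernel compilation, caches
            model(img)
        if img.is_cuda:
            torch.cuda.synchronize()      # GPU work is asynchronous
        t0 = time.time()
        for _ in range(iters):
            model(img)
        if img.is_cuda:
            torch.cuda.synchronize()
    dt = (time.time() - t0) / iters
    print(f'{dt * 1000:.1f} ms/img, {1 / dt:.1f} FPS')
```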
Wow, thank you for your efforts!
The inference speed of the smallest model on CPU is only about 3 FPS. Could you please give me some advice on how to speed it up?
@sljlp I think for any ML model in object detection, CPU performance will always be quite slow. YOLOv5s is faster on CPU than EfficientDet-D0, but as you can see it still does not compare to GPU speeds, or to neural engines like Apple's 5-TOPS ANE. These are really the only acceptable solutions if you want fast inference today.
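One further option for CPU deployment, not specific to this repo: export the model to ONNX and run it with an optimized runtime, which often beats eager PyTorch on CPU. A hedged sketch follows; operator coverage depends on the model (the export may need model-specific tweaks), and the file name here is arbitrary:

```python
# Hedged sketch: ONNX export + onnxruntime is one common route to faster CPU
# inference; actual support depends on the ops in the model being exported.
import torch
import onnxruntime as ort

model = model.float().cpu().eval()          # previously loaded model, FP32 on CPU
dummy = torch.zeros(1, 3, 640, 640)         # fixed input shape for the export
torch.onnx.export(model, dummy, 'yolov5s.onnx',
                  opset_version=11,
                  input_names=['images'], output_names=['output'])

sess = ort.InferenceSession('yolov5s.onnx')  # defaults to the CPU provider
out = sess.run(None, {'images': dummy.numpy()})
```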