
How to reduce the inference time? #24

Closed
sky-fly97 opened this issue Jun 8, 2020 · 7 comments
Labels
enhancement (New feature or request), Stale (stale and scheduled for closing soon)

Comments

@sky-fly97

sky-fly97 commented Jun 8, 2020

Compared with v3-spp, inference takes twice as long, and the augmentation is important to me, so I cannot remove it.
Is there any way to speed up without losing too much mAP?
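A minimal timing sketch to quantify the test-time augmentation (TTA) overhead, assuming a model whose forward() accepts an `augment` flag, as YOLOv5's does:

```python
import time
import torch

def mean_latency(model, img, augment=False, runs=50):
    # Average forward-pass latency in seconds; `augment` toggles TTA.
    model.eval()
    with torch.no_grad():
        model(img, augment=augment)  # warm-up so one-time setup doesn't skew timing
        t0 = time.time()
        for _ in range(runs):
            model(img, augment=augment)
    return (time.time() - t0) / runs
```

Comparing `mean_latency(model, img, augment=False)` against `augment=True` puts a concrete number on the speed/mAP trade-off being asked about.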

@sky-fly97 sky-fly97 added the enhancement New feature or request label Jun 8, 2020
@github-actions
Contributor

github-actions bot commented Jun 8, 2020

Hello @sky-fly97, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Google Colab Notebook, Docker Image, and GCP Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI surveillance systems operating on hundreds of HD video streams in real time.
  • Edge AI integrated into custom iOS and Android apps for real-time 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher
Member

@sky-fly97 we are continuously researching speed and accuracy improvements. If you have proven ideas that show quantitative improvement we'd be happy to integrate them into our work.

If you have GPU resources you'd like to contribute to the research, we can also send you Docker files to run, which will speed up our research and lead to improvements sooner.

@glenn-jocher
Member

@sky-fly97 new models were released yesterday that are smaller and faster. See the README table for updated speeds.
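A hedged sketch of trying one of the smaller checkpoints; the `'model'` key is an assumption about the checkpoint layout here, so adjust to the actual format:

```python
import torch

# 'yolov5s.pt' is assumed to be the smallest of the released family;
# larger variants trade speed for accuracy.
ckpt = torch.load('yolov5s.pt', map_location='cpu')
model = ckpt['model'].float().eval()  # assumes the checkpoint stores the model under 'model'
n_params = sum(p.numel() for p in model.parameters())
print(f'{n_params / 1e6:.1f}M parameters')  # fewer parameters -> faster inference, lower mAP
```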

@sky-fly97
Author

> @sky-fly97 new models were released yesterday that are smaller and faster. See the README table for updated speeds.

Wow, thank you for your efforts!

@sljlp

sljlp commented Jun 13, 2020

The inference speed of the smallest model on CPU is 3 FPS? Could you please give me some advice on how to speed it up?

@glenn-jocher
Member

@sljlp I think for any object detection ML model, CPU performance will always be quite slow. YOLOv5s is faster on CPU than EfficientDet-D0, but as you can see it still does not compare to GPU speeds, or to neural engines like Apple's 5 TOPS ANE. These are really the only acceptable options if you want fast inference today.
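For anyone hitting the same wall, a sketch of the usual CPU-side mitigations (illustrative, not from this thread): a smaller input resolution and disabled autograd.

```python
import torch
import torch.nn.functional as F

torch.set_num_threads(4)  # match physical cores; oversubscription can hurt CPU inference

def fast_cpu_infer(model, img):
    # Latency scales roughly with pixel count, so a 320x320 input is about
    # 4x cheaper than 640x640, at some cost in mAP.
    img = F.interpolate(img, size=(320, 320), mode='bilinear', align_corners=False)
    model.eval()
    with torch.no_grad():
        return model(img)
```

Exporting to an optimized runtime (e.g. ONNX Runtime) is another common route, though the gains vary by model and hardware.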

@github-actions
Contributor

github-actions bot commented Aug 1, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
