Add `device` argument to PyTorch Hub models #3104
Conversation
👋 Hello @cgerum, thank you for submitting a 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is up-to-date with origin/master. If your PR is behind origin/master an automatic GitHub Actions rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:

```shell
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
git checkout feature  # <----- replace 'feature' with local branch name
git rebase upstream/master
git push -u origin -f
```
- ✅ Verify all Continuous Integration (CI) checks are passing.
- ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." -Bruce Lee
/rebase
@cgerum thanks for the PR! Is there a downside to simply sending the model to CPU after it's created using the normal PyTorch commands like this?

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s').cpu()  # send model to CPU
```
This approach has two problems:

We can probably work around both problems by carefully managing our GPU resources and modifying
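The approach the PR settles on is accepting a device argument at model-creation time rather than moving the model afterwards. A minimal sketch of such device-resolution logic, with `torch.device` replaced by a tiny stand-in class so the example runs without torch installed (all names here are illustrative, not the actual YOLOv5 implementation):

```python
class FakeDevice:
    """Stand-in for torch.device so this sketch needs no torch install.

    Like torch.device, it accepts an already-wrapped device object, which is
    why the nested torch.device(torch.device(device)) pattern in this PR is ok.
    """
    def __init__(self, spec):
        if isinstance(spec, FakeDevice):
            spec = spec.type  # idempotent: wrapping a device twice is harmless
        self.type = str(spec)

    def __eq__(self, other):
        return isinstance(other, FakeDevice) and self.type == other.type

    def __repr__(self):
        return f"device(type={self.type!r})"


def resolve_device(device=None, cuda_available=False):
    """Pick a target device: an explicit argument always wins; otherwise
    fall back to the first GPU when available, else CPU. Illustrative only."""
    if device is not None:
        return FakeDevice(device)
    return FakeDevice('cuda:0' if cuda_available else 'cpu')
```

With this shape, `resolve_device('cpu', cuda_available=True)` still yields a CPU device, which is exactly the use case the PR description asks for: loading to CPU even when CUDA is available.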
@cgerum I've updated this a bit. It's ready to merge on my end, just waiting on the CI tests to run; they seem unavailable/queued today for unknown reasons.
Thanks a lot; for me it works like a charm.
@cgerum ok, I'll go ahead and merge then if the PR works on your system. Hopefully the actions sort themselves out tomorrow. Thank you for your contributions!
* Allow manual selection of device for torchhub models
* single line device
* nested torch.device(torch.device(device)) ok

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com> (cherry picked from commit b133baa)
For my use case I would like to load the pretrained models from Torch Hub to CPU even if CUDA is available.
This merge request adds an optional `device` parameter to the `hubconf.py` functions to allow manual selection of target devices.

🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Added model device selection support to YOLOv5 model creation functions.
📊 Key Changes
device
parameter to the_create()
function and all model creator functions (e.g.,yolov5s
,yolov5m
, etc.).device
parameter allows explicit selection of the computing device (CPU or GPU) where the model parameters will be loaded.🎯 Purpose & Impact