Module collision when loading more than one model with Torch Hub #243
Thanks for the detailed report @carloalbertobono, I can reproduce the issue. I'll look into this.

As a temporary and ugly workaround, try this:

```python
import torch
import sys

torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)
sys.modules.pop('models')  # ¯\_(ツ)_/¯
torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
```

The error comes from the fact that the modules imported in the first hubconf file (the detr one) are still present in the imported module cache, even after the detr folder has been removed from the import path:

```python
import torch

torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)

# There's no "models" dir or package here, but this import still works because it's
# the "models" module from the detr repo, and it's still in the module cache.
import models
print(models)
# prints <module 'models' from '/Users/nicolashug/.cache/torch/hub/facebookresearch_detr_main/models/__init__.py'>
```
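A slightly more general variant of the same workaround, offered as a sketch rather than something from the thread: record which modules the first hub load adds to sys.modules and drop them afterwards, so that any colliding package name is cleared, not just 'models'. The helper name load_hub_isolated is made up for illustration.

```python
import sys
import torch

def load_hub_isolated(repo, model, **kwargs):
    """Load a torch.hub model, then drop modules its hubconf imported so they
    cannot shadow same-named packages in the next hub repo (hypothetical helper,
    generalizing the sys.modules.pop('models') trick above)."""
    before = set(sys.modules)
    result = torch.hub.load(repo, model, **kwargs)
    for name in set(sys.modules) - before:
        # Keep torch's own lazily imported submodules; drop anything else that
        # was newly imported during the hub load.
        if not name.startswith('torch'):
            sys.modules.pop(name, None)
    return result

detr = load_hub_isolated('facebookresearch/detr', 'detr_resnet50', pretrained=True)
yolo = load_hub_isolated('ultralytics/yolov5', 'yolov5s', pretrained=True)
```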
Hi @NicolasHug, thank you very much for looking into this 🙏 I can confirm that the workaround works like a charm. As far as I'm concerned, this is solved. All the best
@vmoens and I have tried to find a reasonable solution to fix this issue, but we were unable to find one that would be simple, non-magical, and fully torchscript-proof. My closest attempt is #247 (comment), but it still fails in some very specific cases related to torchscript. Even if it didn't, I think the solution is a bit too magical to be reasonable. Then again, this bug happens because of pre-existing torchhub magic, so it's no wonder that more magic is needed to fix it.

Anyway, I'm afraid we'll have to mark this as a wont-fix issue. I provided a workaround above, and I'll make a PR to torch core to mention this in the "Known limitations" section of the torchhub docs page. Thanks again for the report @carloalbertobono
Thank you very much @NicolasHug and @vmoens for the prompt help, and of course for the work!
Referenced commits:
"…hub docs": a follow-up to pytorch/hub#243.
#69970: a follow-up to pytorch/hub#243. Reviewed By: jbschlosser. Differential Revision: D33124060. Pulled By: NicolasHug.
Hi, I'm having an issue with loading multiple specific models with torch.hub.load.
I originally posted the description here, and @glenn-jocher suggested posting it here too.
I'm pasting the original issue below:
Hi, I think I have a similar issue to 2414.
It prevents loading more than one model with torch.hub when certain specific models are involved.
If I'm not mistaken from reading that thread, loading a model with torch.hub shadows some module names, which then become unusable within torch.
I'm using torch '1.9.1+cu102' on an Ubuntu 20.04 machine, and to reproduce I load two specific models one after the other with torch.hub.load, as sketched below.
The second load ends up in an error, and reversing the load order makes the other model's load fail instead.
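A minimal sketch of the reproduction, assuming the two models involved are the same ones used in the workaround above (detr_resnet50 from facebookresearch/detr and yolov5s from ultralytics/yolov5):

```python
import torch

# First load: the detr hubconf imports a top-level package named "models",
# which stays behind in sys.modules after the call returns.
detr = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)

# Second load: the yolov5 hubconf also relies on a package named "models", but the
# name now resolves to the cached detr module, so this call fails.
yolo = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
```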
Is there some workaround that I'm not seeing?
Thank you very much, also for the awesome project itself.
cb