
Multiprocessing leads to Module Not Found error if absolute imports are used in modules #353

Open
sammlapp opened this issue Oct 6, 2024 · 2 comments

sammlapp commented Oct 6, 2024

I have a GitHub repo where I define some models and a hubconf.py file for access via the torch.hub.load() API. The models work fine when there is no multiprocessing (i.e. num_workers=0 for the DataLoader), but they fail with a pickling ModuleNotFoundError if two conditions are true: (1) num_workers>0 and (2) the module that the model object is defined in uses an absolute import for another module in my repo.

For example, with the structure:

my_repo/
    utils.py
    a/
        model_a.py # contains class ModelA
    ...

If model_a.py has from my_repo import utils, and we load ModelA through torch.hub.load() and try to use num_workers>0 in a DataLoader, the error looks like this:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File ".../python3.10/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File ".../python3.10/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
ModuleNotFoundError: No module named 'my_repo'

This makes some sense, because we never installed my_repo as a package when loading ModelA: with the spawn start method, the DataLoader's worker processes unpickle the objects sent to them and have to re-import the module that defines those objects, and they cannot find my_repo.
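
For illustration, here is a minimal sketch of the failing pattern. The repo URL, the entrypoint name, and the model.collate attribute are hypothetical, stand-ins for whatever object defined inside my_repo ends up being pickled into the workers:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical hubconf.py entrypoint; returns an instance of ModelA, whose
# defining module does `from my_repo import utils`.
model = torch.hub.load("user/my_repo", "model_a")

dataset = TensorDataset(torch.randn(8, 3, 224, 224))

# Anything defined inside my_repo that is handed to the DataLoader (here a
# hypothetical collate function) gets pickled and sent to each worker.
loader = DataLoader(
    dataset,
    batch_size=2,
    num_workers=2,              # num_workers=0 works fine
    collate_fn=model.collate,   # hypothetical attribute defined in my_repo
)

# With the spawn start method (the default on macOS and Windows), each worker
# unpickles collate_fn and must re-import my_repo, which fails with
# ModuleNotFoundError because my_repo was never installed.
for (batch,) in loader:
    model(batch)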

Edit: I thought relative imports might be a workaround, but they do not fix the issue.

Is there a solution to this? Thanks!

NicolasHug (Member) commented Oct 7, 2024

Hi @sammlapp , thanks for the report. Just to make sure, can you run and import the model properly without calling torch.hub.load()?
If yes, then sorry, you're probably hitting one of the few edge cases that exist when using torch.hub, and I'm afraid there isn't an obvious work-around that comes to mind for this.

sammlapp (Author) commented Oct 7, 2024

Yes, if I install or import the package locally there is no issue. This seems like a pretty important/common use case, because (1) packages with more than a few dozen lines of code often use imports from other modules, and (2) multiprocessing with torch.utils.data.DataLoader and num_workers>0 is an extremely common workflow.
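
For context, a sketch of what works (installing the package locally, as described above) and of one possible workaround (forking DataLoader workers instead of spawning them, an untested suggestion that is not part of this thread and is only available on Unix-like platforms):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Option 1 (what works today): install the repo so 'my_repo' is importable
# inside the worker processes too, then import the class directly:
#     pip install git+https://github.com/user/my_repo   (hypothetical URL)
# from my_repo.a.model_a import ModelA

dataset = TensorDataset(torch.randn(8, 3, 224, 224))

# Option 2 (untested suggestion): fork the workers so they inherit the
# parent's imported modules instead of unpickling and re-importing them.
# Forking is not available on Windows and has known caveats with CUDA.
loader = DataLoader(
    dataset,
    batch_size=2,
    num_workers=2,
    multiprocessing_context="fork",
)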
