
Unable to use nn.DataParallel with GraphINN #112

Open
shakediel opened this issue Feb 23, 2022 · 0 comments
Hello,

I wish to train a GraphINN model on multiple GPUs using PyTorch's nn.DataParallel.

I keep getting an error saying that the weights and the inputs are not on the same device.
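
For reference, here is a minimal sketch of the kind of setup that fails for me. The toy coupling-block network is only a stand-in, put together from the FrEIA examples; any GraphINN shows the same behaviour:

```python
import torch
import torch.nn as nn
import FrEIA.framework as Ff
import FrEIA.modules as Fm

# Small fully-connected subnet, as in the FrEIA examples
def subnet_fc(dims_in, dims_out):
    return nn.Sequential(nn.Linear(dims_in, 128), nn.ReLU(),
                         nn.Linear(128, dims_out))

nodes = [Ff.InputNode(8, name='input')]
nodes.append(Ff.Node(nodes[-1], Fm.GLOWCouplingBlock,
                     {'subnet_constructor': subnet_fc}, name='coupling'))
nodes.append(Ff.OutputNode(nodes[-1], name='output'))
inn = Ff.GraphINN(nodes)

model = nn.DataParallel(inn.cuda())   # replicas on all visible GPUs
x = torch.randn(64, 8).cuda()
z, log_jac_det = model(x)             # -> device-mismatch RuntimeError here
```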

I think that the error stems from these lines:

```python
if has_condition:
    mod_out = node.module(mod_in, c=mod_c, rev=rev, jac=jac)
else:
    mod_out = node.module(mod_in, rev=rev, jac=jac)
```

That is, the module call does not go through self.module_list, which is what holds the modules' parameters, but through self.node_list itself. The weights therefore stay on the original device and are never transferred to the replica's device as intended.
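
If I understand DataParallel's replication correctly, the situation is roughly the one in this toy module (the names are illustrative, not the actual GraphINN code):

```python
import torch.nn as nn

class Sketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Registered container: DataParallel's replicate() copies these
        # parameters onto every target device.
        self.module_list = nn.ModuleList([nn.Linear(8, 8)])
        # Plain Python list: invisible to replicate(), so on every
        # replica it still points at the module on the original device.
        self.node_list = [self.module_list[0]]

    def forward(self, x):
        # x arrives on the replica's device, but the weights reached
        # through node_list stayed on the original device -> mismatch.
        return self.node_list[0](x)
```

Wrapping Sketch in nn.DataParallel fails in the same way, because replicate() only rewrites references that are reachable through registered submodules.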

Does anyone have a possible direction for solving this problem?
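
One direction I have been looking at myself, though I am not sure it is the right fix, is DistributedDataParallel with one process per GPU: each process then builds its own complete GraphINN on its own device, and replicate() is never called. A rough sketch (build_inn() is a hypothetical helper that constructs the network as above):

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ.setdefault('MASTER_ADDR', 'localhost')
    os.environ.setdefault('MASTER_PORT', '29500')
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    inn = build_inn().to(rank)          # hypothetical constructor, see above
    model = DDP(inn, device_ids=[rank])
    # ... per-process training loop with a DistributedSampler ...
    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```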
Thanks in advance!
