Model multiple parameters on TPU #1400
Comments
1. upgrade to master
2. use `load_from_checkpoint(PATH, dataset=YourDataset)`
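A minimal sketch of that suggestion, assuming master-branch behavior where extra keyword arguments to `load_from_checkpoint` are forwarded to the model constructor, and the `self.hparams = hparams` assignment idiom from this era of Lightning; `MyModel`, `MyDataset`, and the checkpoint path are placeholders, not from the original thread:

```python
import pytorch_lightning as pl
from torch.utils.data import Dataset

class MyDataset(Dataset):
    # Stand-in for the user's real dataset.
    def __len__(self):
        return 0

    def __getitem__(self, idx):
        raise IndexError

class MyModel(pl.LightningModule):
    # The constructor requires a second argument besides hparams.
    def __init__(self, hparams, dataset):
        super().__init__()
        self.hparams = hparams
        self.dataset = dataset

# Extra keyword arguments are forwarded to the constructor, so the
# non-hparams parameter can be supplied at load time:
model = MyModel.load_from_checkpoint("path/to/checkpoint.ckpt", dataset=MyDataset())
```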
Oh, I see. Yeah, the `dataset` argument in your constructor is breaking the load. For the Trainer to autoload, you have to use only `hparams` (put the dataset in the `hparams` object, which can be a dict as well). The second option is to submit a PR to enable loading other params as well.
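A sketch of that first option, assuming a plain-dict `hparams`; the class and key names are illustrative only:

```python
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    # Only hparams in the signature, so autoloading can rebuild the model
    # from the checkpoint without extra constructor arguments.
    def __init__(self, hparams):
        super().__init__()
        self.hparams = hparams
        # The dataset rides along inside hparams instead of being its own argument.
        self.dataset = hparams["dataset"]

my_dataset = list(range(10))  # toy stand-in for a real dataset
model = MyModel({"learning_rate": 1e-3, "dataset": my_dataset})
```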
This has nothing to do with TPUs, btw.
Yes, in my code I've moved the dataset into hparams as you suggested, but I suppose there should be some check against the original problem for future users. I mentioned TPU because when I ran the same code on the GPU and CPU runtimes the error was not raised; probably the recently introduced check on the proc rank is what handled it.
I have a similar error when passing in multiple constructor arguments. Update: if I use a single argument (combining them into a dict), I hit a different error.
This should be fixed by #2047.
@rzepinskip Hi, I've hit a similar case: I use multi-GPU, and my model class (pl.LightningModule) also takes multiple init parameters. When I load the checkpoint, it raises exactly the same error. Has this been fixed?
🐛 Bug
`load_from_checkpoint` fails for a model with additional required parameters (besides `hparams`) in the model constructor on TPU with more than 1 core.

To Reproduce
Steps to reproduce the behavior:
1. Define a model with additional required parameters (besides `hparams`) in the model constructor, e.g. `dataset`.
Code sample
Google Colab Notebook
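The linked notebook isn't reproduced on this page; as a stand-in, a minimal sketch of the failing pattern, with all names assumed for illustration:

```python
import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    # The extra required `dataset` argument (besides hparams) is what
    # breaks autoloading from the checkpoint.
    def __init__(self, hparams, dataset):
        super().__init__()
        self.hparams = hparams
        self.dataset = dataset
        self.layer = torch.nn.Linear(28 * 28, 10)

# After training on TPU with more than 1 core, loading fails because the
# checkpoint machinery cannot supply `dataset`:
# model = LitModel.load_from_checkpoint("checkpoint.ckpt")  # TypeError
```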
Expected behavior
Model parameters are saved and loaded correctly.
Environment
- How you installed PyTorch Lightning (conda, pip, source): pip