fix lightning hparams conflict #205
base: main
Conversation
I'm not sure if there's a more efficient way to improve this implementation; if you have one, please propose it. Moreover, if you believe that addressing this error, which only arises occasionally, is not worth the effort, we can consider closing this pull request (PR).
I am of the opinion that if the only difference in hparams is `load_model` (why is that an "hparam" anyway) then it's ok.
It works like a dict, so you can do:
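For illustration, a minimal runnable sketch of that dict-like behaviour (the example module, its arguments, and the values below are assumptions, not code from this repository):

```python
import pytorch_lightning as pl


class ExampleModule(pl.LightningModule):
    def __init__(self, load_model=None, lr=1e-3):
        super().__init__()
        # Stores the __init__ arguments in self.hparams (an AttributeDict)
        self.save_hyperparameters()


model = ExampleModule()
model.hparams["load_model"] = "best_model.ckpt"          # item assignment
model.hparams.update({"load_model": "best_model.ckpt"})  # dict-style update
print(model.hparams.load_model)                          # attribute-style access
```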
It's possible to use the … I'm actually training with the version present on this PR.
I do not see any problems with this approach. As far as I understand, the `input.yaml` file is guaranteed to exist and contain the desired parameters.
cc @raimis @stefdoerr Do you see anything dangerous here?
I feel like it's safer to just update those two parameters than to load the whole config in.
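A rough sketch of that suggestion, assuming `model` and `data` are the LightningModule and LightningDataModule being compared and that `load_model` is one of the offending keys (the exact keys are not spelled out in this thread):

```python
# Copy only the conflicting keys from the model into the data module,
# instead of re-reading the whole config into data.hparams.
KEYS_TO_SYNC = ("load_model",)  # illustrative; the real keys may differ

for key in KEYS_TO_SYNC:
    if key in model.hparams:
        data.hparams[key] = model.hparams[key]
```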
…into LightningModule_&_LightningDataModule
…into LightningModule_&_LightningDataModule
DO NOT MERGE YET!
In PyTorch Lightning, the LightningModule and LightningDataModule are two separate components with distinct responsibilities. Occasionally, conflicts in hyperparameters may arise between the model and the data loading process, causing issues during training or inference.
For example, in the case that I'm reporting, the error comes out when the test is about to be performed: `best_model.ckpt` was saved before I resumed training using `load_model`, and because of this the keys mismatch. One approach to fix it is to update `data.hparams` using `model.hparams` (the change proposed by this PR). It's also true that maybe the hparams of the data and the model should not be allowed to differ much, but only in a few keys like `load_model`; if this is the case, maybe the best solution is to write a function like the sketch below:
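(Hypothetical sketch only; the function name, signature, and whitelist are mine, not code from this PR.)

```python
from typing import Iterable, Mapping, MutableMapping


def reconcile_hparams(
    model_hparams: Mapping,
    data_hparams: MutableMapping,
    allowed: Iterable[str] = ("load_model",),
) -> None:
    """Allow model/data hparams to differ only in a small whitelist of keys.

    Any mismatch outside the whitelist is treated as a real configuration
    error; whitelisted keys are copied from the model into the data module.
    """
    allowed = set(allowed)
    mismatched = {
        key
        for key in set(model_hparams) | set(data_hparams)
        if model_hparams.get(key) != data_hparams.get(key)
    }
    unexpected = mismatched - allowed
    if unexpected:
        raise ValueError(
            f"hparams differ in keys that are required to match: {sorted(unexpected)}"
        )
    for key in mismatched:
        data_hparams[key] = model_hparams.get(key)
```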