fix historical forecasts retraining of TFMs #1465
Conversation
Codecov Report
Base: 93.95% // Head: 93.91% // Decreases project coverage by -0.04%.
Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master    #1465      +/-   ##
==========================================
- Coverage   93.95%   93.91%   -0.04%
==========================================
  Files         122      122
  Lines       10728    10730       +2
==========================================
- Hits        10079    10077       -2
- Misses        649      653       +4
```
☔ View full report at Codecov.
Nice, thanks!!
I would say we shouldn't mess at all anymore with `self` :)
There's an error in the quickstart notebook now, could you have a look?
```diff
@@ -871,13 +865,17 @@ def historical_forecasts(
                 if future_covariates_
                 else None,
             ):
-                self._fit_wrapper(
+                # avoid fitting the same model multiple times
+                model = self if not self._fit_called else self.untrained_model()
```
I'm thinking we should always call `untrained_model()` even for the first pass here, WDYT?
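For reference, a small self-contained sketch of what `untrained_model()` provides, which is what the always-reinitialize variant would rely on (the model choice, parameters, and dataset below are illustrative only, not taken from the PR):

```python
from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel

series = AirPassengersDataset().load()
model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=5)
model.fit(series)

# untrained_model() gives back a fresh, unfitted copy of this model;
# the fitted `model` itself is left untouched.
fresh = model.untrained_model()
assert not fresh._fit_called
assert model._fit_called
```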
Extremely good that you clarified this, kudos!
I reckon this would entail some decisions about what to do with the optimizer state, the LR schedule, ... I am struggling right now to do a reset of these without touching the model weights. I can give some input if someone is willing to try a less hacky way than mine.
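For what it's worth, in plain PyTorch the optimizer state and LR schedule can be reset without touching the weights simply by rebuilding both objects. Below is a minimal sketch under that assumption; it is not Darts-specific and deliberately ignores the PyTorch Lightning `Trainer` state (epoch/step counters) that the TFMs also carry:

```python
import torch

def reset_training_state(module: torch.nn.Module, lr: float = 1e-3):
    """Rebuild optimizer and LR scheduler from scratch, leaving model weights untouched."""
    # A freshly constructed optimizer has empty state (no momentum buffers,
    # no step counts), so replacing it resets optimization without touching weights.
    optimizer = torch.optim.Adam(module.parameters(), lr=lr)
    # Likewise, a new scheduler restarts the learning-rate schedule from its initial value.
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
    return optimizer, scheduler

# Usage sketch: the weights of `net` are unchanged, but all optimizer/LR state is gone.
net = torch.nn.Linear(8, 1)
optimizer, scheduler = reset_training_state(net, lr=1e-3)
```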
As discussed, leaving the model object unchanged in `historical_forecasts()` brought some necessary changes with it. See, for example, the quickstart notebook.
Hi @solalatus. I believe the fine-tuning would make `historical_forecasts` too complex. Also, this would only be supported by our TorchForecastingModel (TFM). I would rather opt for making it easier to fine-tune our TFMs in general. IMO it's best to keep this separate, but let me know if I'm missing something.
Absolutely agree. I was thinking about a different function than the standard historical forecast. In fact, as I tried to think it over in this "issue", a reinit feature and the normal fit would suffice. I hacked together a really ugly solution you can see in this gist; any feedback and thoughts are much appreciated!
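Not the gist itself, but a rough sketch of the kind of reinit-plus-fine-tune workflow being discussed, using only the public Darts API. The model choice, split points, and the short `epochs` fine-tuning pass are assumptions made for illustration:

```python
from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel

series = AirPassengersDataset().load()
model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=50)

# Initial training on the first part of the series.
train, remainder = series.split_before(0.7)
model.fit(train)

# Expanding-window "fine-tuning": instead of retraining from scratch at every
# forecast point, keep the weights and train for a few extra epochs on the
# grown history, then forecast the next year.
forecasts = []
for cutoff in remainder.time_index[::12]:
    history, _ = series.split_before(cutoff)
    model.fit(history, epochs=5)  # assumes the TFM fit() `epochs` override
    forecasts.append(model.predict(n=12))
```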
LGTM, thanks @dennisbader
Fixes #1461.
Summary
Currently, `historical_forecasts` with `retrain=True` trains the same model object multiple times. For TorchForecastingModels this is an issue: instead of training a fresh model from scratch for each iteration, we are rather "fine-tuning" the existing model.
Important things to consider
Below are some changes to the current behavior. IMO these make sense, but let me know if you think of better approaches:

- `historical_forecasts` (HF) on an already fitted model with `retrain=True` will raise an error if input dimensions/covariates/... do not match the input from the initial training -> this will not be raised with this PR, as we train a new model from scratch.
- `self` gets updated (with fit calls) in each iteration of HF using `retrain=True` -> with this PR, the instance only gets updated in the first iteration, and only if it has not been fit before. (?) Do we even want to "mess" with `self` at all?
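For context on the behavior described in the summary above, here is a minimal, self-contained example of the kind of call affected by this change (the model, dataset, and parameters are illustrative, not taken from the PR):

```python
from darts.datasets import AirPassengersDataset
from darts.models import NBEATSModel

series = AirPassengersDataset().load()
model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=20)

# With retrain=True, every iteration refits on the expanding history. With this PR,
# each refit starts from a freshly re-initialized (untrained) model instead of
# fine-tuning the weights left over from the previous iteration.
backtest = model.historical_forecasts(
    series,
    start=0.7,            # begin forecasting after 70% of the series
    forecast_horizon=12,
    stride=12,
    retrain=True,
)
```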