Feat/likelihood parameters prediction #1811
Conversation
…instead of sampling them
…likelihood model parameters instead of the predicted target values
Codecov Report
Patch coverage:

@@            Coverage Diff             @@
##           master    #1811      +/-   ##
==========================================
- Coverage   94.00%   93.84%   -0.17%
==========================================
  Files         126      126
  Lines       11905    12162     +257
==========================================
+ Hits        11191    11413     +222
- Misses        714      749      +35

☔ View full report in Codecov by Sentry.
Really nice @madtoinou, this will be a great feature 🚀
Some points from our discussions (a sketch of the corresponding checks follows below):
- enforce `n <= output_chunk_length` when `likelihood_params=True`
- enforce `num_samples == 1` when `likelihood_params=True` (I tried with DLinear, and it didn't raise an error with `num_samples > 1`)
- avoid sampling when `likelihood_params=True`
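A minimal sketch of what the requested checks could look like; the helper name and messages are hypothetical and not the actual darts implementation:

```python
# Hypothetical helper illustrating the checks requested above;
# not the actual darts code.
def _check_likelihood_parameters_args(
    n: int, output_chunk_length: int, num_samples: int
) -> None:
    if n > output_chunk_length:
        raise ValueError(
            "When predicting likelihood parameters, `n` must not exceed "
            "`output_chunk_length` (no auto-regression over parameters)."
        )
    if num_samples != 1:
        raise ValueError(
            "When predicting likelihood parameters, `num_samples` must be 1, "
            "since no sampling takes place."
        )
```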
Co-authored-by: Dennis Bader <dennis.bader@gmx.ch>
…inting from LikelihoodMixin
…ipulation more clear
…not class specific anymore, loop in the test
let's update the `self.output_chunk_length` of all TFMs to `self._output_chunk_length` as well, and then update the property accordingly
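A minimal sketch of the suggested rename-plus-property pattern, on a simplified model class (illustrative only, not the darts source):

```python
class MyForecastingModel:
    # Illustrative sketch: store the value privately and expose it
    # through a read-only property under the original public name.
    def __init__(self, output_chunk_length: int):
        self._output_chunk_length = output_chunk_length

    @property
    def output_chunk_length(self) -> int:
        """Number of time steps predicted in one forward pass."""
        return self._output_chunk_length
```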
Updated. I tried to keep `self.output_chunk_length` when possible (it is available once the model/PyTorch Lightning module is instantiated).
…ccessible with the getter
Congrats @madtoinou for this great PR 💯
Ready to be merged 🚀
Fixes #1735, fixes #1445.
Summary
Edit: Final version:
After fitting a regression or torch-based model with a likelihood, it's possible to pass the flag `predict_likelihood_parameters=True` to `predict()` and directly predict the distribution parameters instead of sampling (which remains the best way to simulate/visualize uncertainty).
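A minimal usage sketch, assuming a darts version that includes this PR; the dataset and model choice are illustrative:

```python
from darts.datasets import AirPassengersDataset
from darts.models import LinearRegressionModel

series = AirPassengersDataset().load()

# Fit a regression model with a quantile likelihood.
model = LinearRegressionModel(
    lags=12,
    likelihood="quantile",
    quantiles=[0.05, 0.5, 0.95],
)
model.fit(series)

# Predict the likelihood (quantile) parameters directly instead of sampling;
# per this PR, this requires n <= output_chunk_length and num_samples == 1.
params = model.predict(n=1, predict_likelihood_parameters=True)
```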
Draft version:
The distribution parameters are returned by the `Likelihood.sample()` method (in addition to the sampled values) and processed in the same way as the target forecast values in `predict()`/`get_batch_prediction()`, which remain necessary when the number of forecasted values is large (auto-regression). If the flag is set, the output of `predict()` contains only the distribution parameters (no target forecast).

Other Information
Based on @hrzn's comment in #1445, a warning should probably be raised if the predicted TimeSeries contains many components, to avoid dimension explosion (e.g., a quantile likelihood with 3 quantiles on a 10-component series yields 30 parameter components). Another solution would be to restrict this feature to only a few Likelihoods.
Verifying that the model converges to the correct parameters is too time-consuming for the unit tests, but I did check manually for some distributions (Gaussian, Poisson, Quantile) and the results looked good.
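A hedged sketch of the kind of manual convergence check described above; the synthetic data, model choice (DLinear, as mentioned in the review), and hyperparameters are assumptions:

```python
import numpy as np
from darts import TimeSeries
from darts.models import DLinearModel
from darts.utils.likelihood_models import GaussianLikelihood

# Synthetic Gaussian series with known mean 10 and std 2.
values = np.random.normal(loc=10.0, scale=2.0, size=400).astype(np.float32)
series = TimeSeries.from_values(values)

model = DLinearModel(
    input_chunk_length=24,
    output_chunk_length=12,
    likelihood=GaussianLikelihood(),
    n_epochs=50,
)
model.fit(series)

# The predicted parameter components should approach mu ≈ 10 and sigma ≈ 2.
params = model.predict(n=12, predict_likelihood_parameters=True)
print(params.components, params.values()[:3])
```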