Improve forecast with step changes #2466

Closed
tveasey opened this issue Mar 14, 2023 · 0 comments · Fixed by #2591

tveasey commented Mar 14, 2023

When creating forecasts for time series which have step changes, we build a model, based on historical data, of the conditions under which we expect the time series to step: specifically, the values at which steps occur and the interval between them. This is a probabilistic model, so we run a number of roll outs to estimate an expected value and distribution. We have seen this misbehave when the forecast time series value is too far from the values for which we have a reasonable characterisation of this distribution. At the moment the behaviour in such cases depends on how we characterise the tails of the distribution; it would be more appropriate to be cautious. This issue covers the work to detect such cases and avoid stepping the time series in them.
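To illustrate the roll-out mechanism described above, here is a minimal sketch, assuming a toy step model learned from historical step sizes and intervals. All names (`StepModel`, `forecast_with_steps`, `maybe_step`) and the constant hazard factor are hypothetical and are not the ml-cpp implementation:

```python
# Toy illustration: forecast a series with step changes by running many
# probabilistic roll outs and averaging them. Hypothetical names throughout.
import random
import statistics


class StepModel:
    """Toy model of step behaviour learned from history: step sizes and the
    typical interval between steps."""

    def __init__(self, step_sizes, step_intervals):
        self.mean_size = statistics.mean(step_sizes)
        self.sd_size = statistics.stdev(step_sizes)
        self.mean_interval = statistics.mean(step_intervals)

    def maybe_step(self, time_since_last_step):
        # Toy hazard: the chance of a step grows with time since the last one.
        p = 0.2 * min(1.0, time_since_last_step / self.mean_interval)
        if random.random() < p:
            return random.gauss(self.mean_size, self.sd_size)
        return 0.0


def forecast_with_steps(last_value, trend_per_bucket, model, horizon, n_rollouts=200):
    """Estimate the forecast mean and spread per bucket by averaging roll outs."""
    rollouts = []
    for _ in range(n_rollouts):
        value, since_step, path = last_value, 0, []
        for _ in range(horizon):
            since_step += 1
            step = model.maybe_step(since_step)
            if step != 0.0:
                since_step = 0
            value += trend_per_bucket + step
            path.append(value)
        rollouts.append(path)
    means = [statistics.mean(r[t] for r in rollouts) for t in range(horizon)]
    sds = [statistics.stdev(r[t] for r in rollouts) for t in range(horizon)]
    return means, sds


if __name__ == "__main__":
    model = StepModel(step_sizes=[9.0, 11.0, 10.5, 9.5],
                      step_intervals=[40, 50, 45, 55])
    means, sds = forecast_with_steps(last_value=100.0, trend_per_bucket=0.1,
                                     model=model, horizon=60)
    print(f"bucket 59: mean={means[-1]:.1f}, sd={sds[-1]:.1f}")
```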

@tveasey tveasey self-assigned this Mar 14, 2023
tveasey added a commit that referenced this issue Nov 2, 2023
We model the level of a time series which we've observed having step discontinuities via a Markov process for forecasting. Specifically, we estimate the historical step size distribution and the distribution of the steps in time and as a function of the time series value. For the second part we use an online naive Bayes model to estimate the probability that, at any given point in a roll out for forecasting, we will get a step.

This approach generally works well unless, when we roll out, we're in the tails of the values we've observed for the time series historically. In this case, our predicted step probabilities are very sensitive to the tail behaviour of the distributions we fit to the time series values where we saw a step, and we sometimes predict far too many steps as a result. We can detect this case: it occurs when we're in the tails of the time series value distribution.

This change adds that detection and stops predicting changes in such cases, which avoids these pathologies. This fixes #2466.
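To make the guard concrete, here is a minimal sketch, assuming a Gaussian naive-Bayes-style estimate of P(step | value) plus a simple quantile-based tail check; the names (`StepProbabilityModel`, `step_probability`) are hypothetical and this is not the actual ml-cpp code:

```python
# Hedged sketch: estimate the step probability as a function of the time
# series value, but refuse to predict a step when the roll-out value is in
# the tails of the values for which we have historical evidence.
import math


class StepProbabilityModel:
    def __init__(self, values_with_step, values_without_step, tail_fraction=0.05):
        self.with_step = list(values_with_step)
        self.without_step = list(values_without_step)
        all_values = sorted(self.with_step + self.without_step)
        # Treat everything outside the central mass of historical values as
        # "tail": we have no reliable characterisation of step behaviour there.
        lo = int(tail_fraction * len(all_values))
        hi = max(lo + 1, len(all_values) - 1 - lo)
        self.lower, self.upper = all_values[lo], all_values[hi]

    @staticmethod
    def _gaussian_likelihood(x, data):
        mean = sum(data) / len(data)
        var = sum((d - mean) ** 2 for d in data) / max(1, len(data) - 1)
        var = max(var, 1e-8)
        return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

    def step_probability(self, value):
        # Be cautious: outside the range we have characterised, predict no step.
        if value < self.lower or value > self.upper:
            return 0.0
        prior_step = len(self.with_step) / (len(self.with_step) + len(self.without_step))
        l_step = self._gaussian_likelihood(value, self.with_step) * prior_step
        l_no_step = self._gaussian_likelihood(value, self.without_step) * (1.0 - prior_step)
        return l_step / (l_step + l_no_step + 1e-12)


model = StepProbabilityModel(values_with_step=[98, 102, 100, 101],
                             values_without_step=[50, 60, 55, 52, 58, 65])
print(model.step_probability(100.0))  # in range: plausible step probability
print(model.step_probability(500.0))  # far in the tail: 0.0, no step predicted
```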
tveasey added a commit that referenced this issue Nov 3, 2023
tveasey added a commit that referenced this issue Dec 7, 2023