fix cross_validation results with uneven windows #989
Merged
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
The
cross_validation
method always produces the same number of windows for each serie, regardless of its size, so we may end up with times that the original serie doesn't have.neuralforecast/neuralforecast/core.py
Lines 861 to 865 in 0c1a760
This conflicts with the definition of the cv_times function, which only keeps the windows that could be produced by a serie when performing cross validation, i.e. if a serie has 51 samples and we use
window_size=10, step_size=10
, then it can produce at most 5 windows (where the first window has 1 training sample)https://github.com/Nixtla/utilsforecast/blob/fe357c49a3b3007256eb54bf586656dd5f3de2f6/utilsforecast/processing.py#L489
So we could end up with dataframes that had a different number of rows and perform a horizontal stack
neuralforecast/neuralforecast/core.py
Line 890 in 0c1a760
which would produce a lot of rows with null values and place the forecasts in the wrong places.
This takes the times produced by the cv_times function and extracts the forecasts that correspond to those times from all the forecasts that were produced. Ideally we should make sure that we don't produce windows that don't exist in the first place to avoid also wasting compute when running inference on windows full of zeros.
Also fixes some failing tests by increasing the tolerance and reduces the max_steps in the BiTCN model to reduce the CI time.