
backtest can accept a list of callables as metric functions #1333

Merged
merged 5 commits into master on Nov 9, 2022

Conversation

madtoinou (Collaborator)

Fixes #1330.

Summary

Added support for a sequence of callables as metric functions, applied to measure the distance between the ground truth and the predictions during backtesting. The model does not have to be retrained to obtain a value for each metric function.

Other Information

  • The backtest function signature remains identical to before, maintaining backward compatibility with existing tests and code.
  • This functionality was not extended to the gridsearch method, as optimizing several metrics comes with additional challenges and parameters (for example, weighting the various metric functions). Such a feature could be implemented later if necessary.
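The metric-handling behavior described above (return a scalar for a single callable, a list of values for a sequence of callables, without re-running the forecasts) can be sketched in isolation. This is a minimal, standalone illustration, not darts' actual implementation; `apply_metrics`, `mae`, and `mse` are hypothetical stand-ins:

```python
from typing import Callable, Sequence, Union

def apply_metrics(actual, predicted,
                  metric: Union[Callable, Sequence[Callable]]):
    """Apply one metric or a sequence of metrics to the same forecast.

    A single callable returns a scalar (preserving the previous
    behavior); a sequence of callables returns one value per metric,
    all computed on the same prediction, so the model never needs to
    be retrained or re-run.
    """
    if callable(metric):
        return metric(actual, predicted)
    return [m(actual, predicted) for m in metric]

# Hypothetical stand-ins for metric functions such as darts.metrics.mae/mse:
def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual = [1.0, 2.0, 3.0]
predicted = [1.5, 2.0, 2.0]

single = apply_metrics(actual, predicted, mae)         # scalar, as before
several = apply_metrics(actual, predicted, [mae, mse]) # list, one value per metric
print(single)   # prints 0.5
print(several)
```

In the actual PR, the same idea applies to `ForecastingModel.backtest`: passing a list to its `metric` argument yields one score per metric from a single backtesting run.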

…est method, retro-compatibility with previous version by unpacking the list when only one metric is used

codecov-commenter commented Nov 2, 2022

Codecov Report

Base: 93.87% // Head: 93.88% // Increases project coverage by +0.00% 🎉

Coverage data is based on head (4ab4b5f) compared to base (06e8e13).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1333   +/-   ##
=======================================
  Coverage   93.87%   93.88%           
=======================================
  Files          78       78           
  Lines        8512     8504    -8     
=======================================
- Hits         7991     7984    -7     
+ Misses        521      520    -1     
Impacted Files Coverage Δ
darts/models/forecasting/forecasting_model.py 97.01% <100.00%> (+0.33%) ⬆️
darts/timeseries.py 92.30% <0.00%> (-0.06%) ⬇️
...arts/models/forecasting/torch_forecasting_model.py 87.08% <0.00%> (-0.06%) ⬇️
darts/models/forecasting/block_rnn_model.py 98.24% <0.00%> (-0.04%) ⬇️
darts/models/forecasting/nhits.py 99.27% <0.00%> (-0.01%) ⬇️


@hrzn (Contributor) left a comment
LGTM, thanks @madtoinou ! Just got a small comment about the docstring

darts/models/forecasting/forecasting_model.py (review comment, now outdated and resolved)
@hrzn hrzn merged commit 1547d5b into master Nov 9, 2022
@madtoinou madtoinou deleted the feat/backtest-multiple-metrics branch March 12, 2023 14:48
Development

Successfully merging this pull request may close these issues.

Add a list of possible metrics to be measured in backtest
3 participants