Fix/metrics #129
Conversation
with self.assertRaises(ValueError):
    metrics.smape(self.series1, self.series1)
Similarly to my comment on @pennfranc's PR, I believe we should split these tests and try to keep a single test per assert, or at least a single test per failure reason (each test method name should describe what it is testing). The problem with the current tests is that if mape fails, we have no idea whether only mape is broken or whether other methods like smape will also fail; the reason for each failure is also not very clear. Let me know what you think :)
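The split suggested above could look like the following sketch. The `smape` function here is a hypothetical stand-in for `darts.metrics.smape` (the real implementation operates on `TimeSeries` objects, not lists); the point is the test structure, with one narrowly named test per failure reason:

```python
import unittest


def smape(actual, pred):
    # Hypothetical stand-in for darts.metrics.smape, for illustration only.
    # Raises when the metric is undefined (both series identically zero).
    if all(a == 0 and p == 0 for a, p in zip(actual, pred)):
        raise ValueError("smape undefined when both series are all-zero")
    return 100.0 * sum(
        2 * abs(p - a) / (abs(a) + abs(p)) for a, p in zip(actual, pred)
    ) / len(actual)


class TestSmape(unittest.TestCase):
    # One test method per behavior; the name states what is being checked,
    # so a failing test immediately identifies which metric and which reason.
    def test_smape_raises_on_all_zero_series(self):
        with self.assertRaises(ValueError):
            smape([0, 0], [0, 0])

    def test_smape_is_zero_for_identical_series(self):
        self.assertEqual(smape([1.0, 2.0], [1.0, 2.0]), 0.0)
```

With this layout, a failure in `mape` no longer masks (or gets masked by) a failure in `smape`, since each metric and each failure mode has its own test method.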
darts/metrics/metrics.py
Outdated
@@ -303,6 +351,8 @@ def mase(actual_series: TimeSeries,
        The series of actual values
    pred_series
        The series of predicted values
    insample
        The series used for training
This is not very clear IMO. What does training refer to here? Do we have to have trained a model in order to use MASE? I think it needs a better explanation, and I would also consider making this argument optional, reverting to the prior behavior when it is not given.
If we want to respect the literature and the Wikipedia page, this error is used for comparing forecast accuracy across multiple time series. So yes, one needs to fit a model to forecast, hence the 'insample' series; 'outsample' is the forecast.
As we link the Wikipedia definition in the docstring, we need to respect it.
A better definition of insample is: "The prior series used to forecast `pred_series`."
A naive forecast method is used to compute the scale of the data.
The prior behavior does not correspond to the definition of MASE. I am not sure it is correct for the function to behave in a way that does not match what it should do.
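For reference, the definition being discussed can be sketched as follows. This is a minimal NumPy illustration of the Wikipedia formulation, not the darts implementation: the in-sample series sets the error scale via a naive (lag-`m`) forecast, and the out-of-sample absolute error is divided by that scale. The function name and signature are illustrative only:

```python
import numpy as np


def mase_sketch(actual, pred, insample, m=1):
    """Mean Absolute Scaled Error, per the textbook definition.

    actual, pred : out-of-sample arrays (the forecast period)
    insample     : the prior (training) series used to produce pred
    m            : seasonal period of the naive in-sample forecast
    """
    # Scale: mean absolute error of the naive lag-m forecast on the
    # in-sample data. This is why 'insample' is required by the definition.
    scale = np.mean(np.abs(insample[m:] - insample[:-m]))
    # MASE: out-of-sample MAE divided by the in-sample naive scale.
    return np.mean(np.abs(np.asarray(actual) - np.asarray(pred))) / scale
```

This makes the reviewers' point concrete: without `insample` there is nothing to compute the scale from, so the prior behavior could not have matched the definition.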
Thanks for the explanation @Droxef, you're right, let's go with your change then. I will just suggest a small change to the docstring (in a separate review comment).
darts/metrics/metrics.py
Outdated
@@ -303,6 +351,9 @@ def mase(actual_series: TimeSeries,
        The series of actual values
    pred_series
        The series of predicted values
    insample
        The prior series used to forecast `pred_series`.
The training series used to forecast `pred_series`.
This series serves to compute the scale of the error obtained by a naive forecaster on the training data.
Just a small docstring change proposed
Fixes DARTS-94.
Summary
Other Information
Add and correct unit tests for the two metrics.