I'm seeing a large difference in performance between the `forecast` package's `ets()` function and fable's `ETS()` when estimating multiple models, surprisingly in favor of `forecast`. Here's a simple example using the `tourism` data set from fpp3. The inelegant loop in the `forecast` section is intentional. Apologies in advance for the rest of the code:
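The original code didn't carry over here, but a minimal sketch of the comparison might look like the following (assuming the quarterly `tourism` tsibble from fpp3, keyed by `Region`, `State`, and `Purpose`, forecasting `Trips` two years ahead; the exact loop in the original may differ):

```r
library(fpp3)  # loads fable, tsibble, dplyr, and the tourism data

# fable: one ETS model per key combination, then forecast
t0 <- Sys.time()
fable_fc <- tourism %>%
  model(ets = ETS(Trips)) %>%
  forecast(h = "2 years")
print(Sys.time() - t0)

# forecast: intentionally inelegant loop over each key combination
library(forecast)
t0 <- Sys.time()
keys <- dplyr::distinct(tibble::as_tibble(tourism), Region, State, Purpose)
forecast_fc <- lapply(seq_len(nrow(keys)), function(i) {
  y <- dplyr::semi_join(tibble::as_tibble(tourism), keys[i, ],
                        by = c("Region", "State", "Purpose"))
  fit <- ets(ts(y$Trips, frequency = 4))  # quarterly data
  forecast(fit, h = 8)                    # 8 quarters = 2 years
})
print(Sys.time() - t0)
```

Both branches fit one automatically selected ETS model per series and produce two years of forecasts, so the wall-clock times should be directly comparable.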
On my machine, the output is:

Time for `fable` estimation & forecast: 106.93 sec
Time for `forecast` estimation & forecast: 37.03 sec
The `comparison_summary` table seems to indicate that the models are giving the same forecasts, but fable takes almost 3 times as long as a simple loop with `forecast`.
Am I using either of the functions incorrectly? My first thought was that fable ETS is searching a much larger set of models, but on default settings the search space for both algorithms should be the same.
I've cross-posted this to StackOverflow in case this is just a simple error or misunderstanding on my part. If I get an answer there I will close the issue.
Thanks for the great library!
I'm able to reproduce this.
The relative performance shouldn't be this bad. I expect fable to be slightly slower than forecast when the model code is identical (due to some added overhead of working with tibbles rather than directly with vectors, and accepting more general time series inputs), but there is something problematic here.
I had this issue yesterday.
I suspect the problem lies more with the forecasting step (the `forecast()` method in fable, not the package) than with the fitting, because ETS took 2 hours to estimate on my data set with 4 future workers, and 5 hours to forecast with a sequential process (memory explodes with more than one process, which is worrying on its own).
But running the example above, fable took 53 seconds to estimate and 55 to forecast, whereas forecast only took 16 seconds for everything, so now I'm not sure.
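The two steps can be timed separately to confirm which one dominates. A sketch using base R's `system.time()` on the fable pipeline from the example above:

```r
library(fpp3)

# Time the estimation step on its own
est_time <- system.time(
  fits <- tourism %>% model(ets = ETS(Trips))
)

# Time the forecasting step on its own, reusing the fitted models
fc_time <- system.time(
  fc <- forecast(fits, h = "2 years")
)

est_time["elapsed"]
fc_time["elapsed"]
```

If `fc_time` is comparable to `est_time` on this small data set but blows up on larger ones, that would point at the forecast step rather than fitting.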
By injecting some timing code into the fabletools package, I found that fabletools (the `model()` function) takes 1 s to prepare the estimation, 38 s to fit the models (checking the data and training the ETS model), and 2 s to prepare the output.
Stepping deeper, I found that ETS training in fable (the `compare_ets` and `etsmodel` functions) is about 20% slower than the implementation in forecast. I cannot figure out why, because the implementations look very similar.
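Instead of injecting timing code by hand, a similar breakdown can be obtained with R's sampling profiler. A sketch using base `Rprof()` (the `profvis` package gives an interactive view of the same data):

```r
library(fpp3)

# Sample the call stack every 10 ms while fable fits the models
Rprof("ets_profile.out", interval = 0.01)
fits <- tourism %>% model(ets = ETS(Trips))
Rprof(NULL)

# Self-time by function: look for fable's etsmodel / compare_ets
# versus the tibble/tsibble plumbing in fabletools
head(summaryRprof("ets_profile.out")$by.self, 20)
```

This makes it easy to see whether the extra time is spent inside the ETS optimizer itself or in the surrounding data handling.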
By the way, the time difference on another of my machines is around 4 s, which seems acceptable.