feat: Added cross-validation tutorial #897
Conversation
jmoralez commented on 2024-02-22T15:52:38Z: Typo in `category` (it says `catefory`). Also, please use the term "timestamp" for …
jmoralez commented on 2024-02-22T15:52:39Z, on Line #2 (`StatsForecast.plot(Y_df)`): This method calls …
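For context, the call under discussion plots the input data before any model is fit. A minimal usage sketch (AirPassengersDF stands in for the tutorial's Y_df, which is not reproduced here):

```python
from statsforecast import StatsForecast
from neuralforecast.utils import AirPassengersDF

# StatsForecast.plot is a static helper that accepts any long-format
# dataframe with unique_id, ds and y columns and returns a figure.
fig = StatsForecast.plot(AirPassengersDF)
fig
```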
jmoralez commented on 2024-02-22T15:52:40Z: I think we could explain in more detail here how the process works, since it's a bit different from the other libraries. The models are trained only once and are used to generate predictions over several windows. The …
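A sketch of the flow this comment describes, with illustrative parameter values rather than the tutorial's: the models are fit once on the data and then produce forecasts over several evaluation windows.

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF

# One model, fit a single time; cross_validation then generates forecasts
# for n_windows test windows, each shifted by step_size timestamps.
nf = NeuralForecast(models=[NHITS(h=12, input_size=24, max_steps=100)], freq='M')

# With h=12, n_windows=3 and step_size=12 the three windows cover the last
# 36 timestamps of each series without overlapping.
cv_df = nf.cross_validation(df=AirPassengersDF, n_windows=3, step_size=12)
cv_df.head()
```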
jmoralez commented on 2024-02-22T15:52:41Z: The 80% and 90% intervals are there because those are the defaults of MQLoss; I think we should clarify that.
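A sketch of how that default could be made explicit in the tutorial; the level values match what the comment describes, while the other arguments are illustrative:

```python
from neuralforecast.losses.pytorch import MQLoss
from neuralforecast.models import NHITS

# Writing the default level out makes it clear where the 80% and 90%
# prediction intervals in the output come from.
model = NHITS(
    h=12,
    input_size=24,
    loss=MQLoss(level=[80, 90]),  # same as MQLoss(); shown for clarity
    max_steps=100,
)
```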
jmoralez commented on 2024-02-22T15:52:42Z: Please use the following here: `from utilsforecast.evaluation import evaluate` and `from utilsforecast.losses import rmse`. The …
MMenchero replied on 2024-02-28T02:47:09Z: All suggested changes have been implemented.
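A minimal sketch of the suggested evaluation code, assuming cv_df is the output of cross_validation with unique_id as a column and one column per model:

```python
from utilsforecast.evaluation import evaluate
from utilsforecast.losses import rmse

# evaluate treats every column other than unique_id, ds and y as a model
# column, so the cutoff column added by cross_validation is dropped first.
evaluation_df = evaluate(cv_df.drop(columns='cutoff'), metrics=[rmse])
evaluation_df
```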
jmoralez commented on 2024-02-28T16:58:28Z: Should be …
MMenchero replied on 2024-02-29T03:33:22Z: Thanks, missed that one.
jmoralez commented on 2024-02-28T16:58:29Z, on Line #1 (`cv_df = cv_df.reset_index()`): I think it'd be better to set …
MMenchero replied on 2024-02-29T03:33:50Z: TIL, that's pretty useful.
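The suggestion is cut off here. Judging from the later commit message "Fixed typo and added environ var", it plausibly refers to the NIXTLA_ID_AS_COL environment variable, which makes Nixtla libraries return unique_id as a regular column instead of the index; treat the sketch below as an assumption rather than the reviewer's exact suggestion.

```python
import os

# Assumption: set before running cross_validation, this flag returns
# unique_id as a column, so cv_df.reset_index() is no longer needed.
os.environ['NIXTLA_ID_AS_COL'] = '1'
```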
jmoralez commented on 2024-02-29T15:27:21Z: Can you please set …
MMenchero replied on 2024-03-01T01:58:45Z: Got it, and I'll open the issue in utilsforecast since I think it'll be very useful.
jmoralez commented on 2024-02-29T15:27:22Z: I missed the last part here; we should remove it (…
MMenchero replied on 2024-03-01T01:59:26Z: I also missed it, but it's now deleted.
cchallu commented on 2024-02-29T19:54:12Z: Can we remove this equation, to keep things simpler? Maybe do a callout box (see other tutorials) with this information and a link to more details.
MMenchero replied on 2024-03-01T04:31:53Z: Sure, I added a callout box and kept it simple in the main text.
cchallu commented on 2024-02-29T19:54:13Z: Sorry for not saying it earlier, but I strongly suggest switching to NHITS and LSTM. The Auto models add a layer of complexity, because we are not explaining the validation set. We can use NHITS and LSTM with their default parameters as well.
MMenchero replied on 2024-03-01T04:36:51Z: I dropped the Auto models and am now using the default values, but in order to keep Capi's suggestion of setting …
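A sketch of the model setup after this change, under the assumption that defaults are kept except for the required arguments (the values shown are illustrative, not the tutorial's):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS, LSTM

# Plain NHITS and LSTM instead of the Auto* wrappers, so no validation set
# or hyperparameter search needs to be explained in the tutorial.
horizon = 12
models = [
    NHITS(h=horizon, input_size=2 * horizon),
    LSTM(h=horizon, input_size=2 * horizon),
]
nf = NeuralForecast(models=models, freq='M')
```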
cchallu commented on 2024-02-29T19:54:14Z: Can we add the plot with sliding windows from the predict insample tutorial? And the explanation above, taking it from the predict insample tutorial as well:
"With the …"
"The following diagram shows how the forecasts are produced based on the …"
cchallu commented on 2024-02-29T19:54:15Z: I think we need an additional paragraph or even a plot. The main goal of the tutorial is explaining cross-validation, so we need to provide all the possible details and explanation. For example: "With the current parameters, cross-validation will look like this:" (and add a plot with an example of a time series showing the three windows at the end, and mention that the model will be trained with data prior to timestamp x, and so on).
MMenchero replied on 2024-03-01T06:44:17Z: I added an additional paragraph + plots at the end to further clarify the concept. It's at the end because I need to rename a column to make the plots, and that is done in section 5.
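A minimal sketch of the kind of per-window plot being requested, assuming the standard cross_validation output columns and a hypothetical NHITS model column:

```python
import matplotlib.pyplot as plt

# One panel per cross-validation window, split on the cutoff column that
# cross_validation adds to cv_df. 'NHITS' stands in for whatever model
# column the tutorial actually plots.
cutoffs = cv_df['cutoff'].unique()
fig, axes = plt.subplots(len(cutoffs), 1, figsize=(8, 3 * len(cutoffs)), squeeze=False)
for ax, cutoff in zip(axes.ravel(), cutoffs):
    window = cv_df[cv_df['cutoff'] == cutoff]
    ax.plot(window['ds'], window['y'], label='actual')
    ax.plot(window['ds'], window['NHITS'], label='forecast')
    ax.set_title(f'Window with cutoff {cutoff}')
    ax.legend()
fig.tight_layout()
```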
cchallu commented on 2024-02-29T19:54:15Z: Same as before, remove the equation.
* feat: Added cross-validation tutorial
* fix: Made suggested changes to cross-validation tutorial
* fix: Removed fix for MPS issue
* fix: Removed metadata
* fix: Fixed typo and added environ var
* fix: Fixed typo and added environ var
* fix: Added new suggestions
---------
Co-authored-by: Cristian Challu <cristiani.challu@gmail.com>
This is part of the new docs for neuralforecast.