Is your feature request related to a current problem? Please describe.
Hello,
I pretrained the NBEATS model on the M4 dataset as explained in your tutorial: https://medium.com/unit8-machine-learning-publication/transfer-learning-for-time-series-forecasting-87f39e375278
Now I want to run a historical forecast on my own data, but neither the retrain = False nor the retrain = True option behaves the way I need. With retrain = False, I keep all the knowledge learned from the large M4 dataset, but the model is never fine-tuned on my data. With retrain = True, I lose all that knowledge and train from scratch on my (too) small dataset.
Describe proposed solution
Would it be possible to start from the pretrained model at each step of the historical forecast instead of starting from scratch? I understand the principle of starting from zero at each step, but this "zero" could be a model pretrained on a bigger dataset, especially for deep learning models. Otherwise the historical forecast is really slow.
I believe this was possible before #1461. Both options (training from scratch and fine-tuning) should be available; otherwise we have to build a raw historical forecast with multiple fits ourselves, which is less convenient.
Thank you in advance for your answer.
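The gap between the two options can be illustrated with a toy expanding-window backtest (a minimal, framework-agnostic sketch: the `MeanModel` class and `backtest` helper are invented for illustration and are not Darts API):

```python
# Toy model: "training" just stores the mean of the series seen so far.
class MeanModel:
    def __init__(self, mean=0.0):
        self.mean = mean  # a pretrained model would start with a useful value

    def fit(self, series):
        self.mean = sum(series) / len(series)
        return self

    def predict(self):
        return self.mean


def backtest(pretrained, series, start, retrain):
    """Expanding-window historical forecast over series[start:]."""
    forecasts = []
    for t in range(start, len(series)):
        if retrain:
            # retrain=True: a *fresh* model is fitted from scratch at each
            # step -- all pretrained knowledge is discarded.
            model = MeanModel().fit(series[:t])
        else:
            # retrain=False: the pretrained model is used as-is and never
            # sees the new data.
            model = pretrained
        forecasts.append(model.predict())
    return forecasts


series = [10.0, 10.0, 10.0, 40.0]
pretrained = MeanModel(mean=25.0)  # stands in for a model trained on M4

print(backtest(pretrained, series, start=2, retrain=False))  # [25.0, 25.0]
print(backtest(pretrained, series, start=2, retrain=True))   # [10.0, 10.0]
```

Neither run combines the pretrained starting point with the local data; the feature request is exactly this missing third mode.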
Fine-tuning will not be possible with historical_forecasts() (at least not in the near future).
It is a non-trivial task which might require freezing specific parts of the architecture, changing the learning rate, optimizer, etc.
In the current Darts version it is not yet possible to change these parameters once the model is trained, so fine-tuning is not really supported.
The upcoming Darts version (release approximately within the next two weeks) comes with a few improvements. For example, users will be able to create a model with a new optimizer, learning rate, etc., and load only the weights from a pretrained/saved model. This makes it possible to fine-tune the model with different settings.
I'd recommend waiting until the new version is released, and then writing custom historical-forecasting fine-tuning logic that performs what you need.
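Once weights can be loaded independently of the optimizer and trainer state, the custom logic suggested above amounts to: at each backtest step, restore the pretrained weights into a fresh model, fine-tune briefly on the data seen so far, then predict. A minimal stand-in sketch (the `FineTunableModel` class and its `fine_tune` step are hypothetical, not Darts API; in Darts, copying the pretrained state would correspond to loading the saved weights into a newly created model):

```python
import copy


class FineTunableModel:
    """Stand-in for a network: its single 'weight' is the value it predicts."""

    def __init__(self, weight):
        self.weight = weight

    def fine_tune(self, series, lr=0.5, epochs=3):
        # A few small gradient-style steps toward the local data, instead of
        # retraining from scratch: the pretrained weight is the starting point.
        for _ in range(epochs):
            target = sum(series) / len(series)
            self.weight += lr * (target - self.weight)
        return self

    def predict(self):
        return self.weight


def historical_forecasts_finetuned(pretrained, series, start):
    """Expanding-window backtest that fine-tunes from the pretrained state."""
    forecasts = []
    for t in range(start, len(series)):
        # Start each step from a fresh copy of the *pretrained* weights,
        # not from a random initialization -- the pretrained model itself
        # is left untouched.
        model = copy.deepcopy(pretrained).fine_tune(series[:t])
        forecasts.append(model.predict())
    return forecasts


preds = historical_forecasts_finetuned(
    FineTunableModel(25.0), [10.0, 10.0, 10.0, 40.0], start=2
)
# Each forecast starts from the pretrained value 25.0 and is pulled
# partway toward the local data, rather than discarding either one.
print(preds)
```

Each step is cheap compared with training from scratch, because only a few fine-tuning epochs are run from a good starting point.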