Finetuning trained model #75

In reference to the recommendation, how should repeated training be executed the second time? Should the checkpoint path point to checkpoints/checkpoint_step...._ema.pth or to the non-EMA checkpoint? And what does "ema" stand for?

Comments
EMA stands for Exponential Moving Average. See the Tacotron 2 paper for details: https://arxiv.org/abs/1712.05884. There's no clear answer as to which one to use for fine-tuning, but I usually used the EMA version of the checkpoint when I had trained a model for a sufficient time (e.g., over 2 days for the MoL case). I'll leave some commands I used for the LJSpeech experiments, which might help, as follows:
The two commands above were actually used for training the pre-trained model.
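For readers wondering what the EMA checkpoints contain: during training, a second copy of the weights is maintained as an exponential moving average of the raw weights, and both copies are saved at checkpoint time, which is why the checkpoints directory holds both a plain and an `_ema.pth` file. Below is a minimal PyTorch sketch of that bookkeeping; the decay value of 0.9999 and the helper names are illustrative assumptions, not the repository's actual code.

```python
import copy

import torch


def make_ema_copy(model):
    # Frozen clone of the model that holds the averaged weights.
    ema_model = copy.deepcopy(model)
    for p in ema_model.parameters():
        p.requires_grad_(False)
    return ema_model


@torch.no_grad()
def update_ema(ema_model, model, decay=0.9999):
    # Called after every optimizer step:
    # ema = decay * ema + (1 - decay) * current
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)
```

Fine-tuning from the `_ema.pth` checkpoint then simply means loading the averaged weights as the starting point instead of the raw ones.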
Thanks! Will try to fine-tune my model now.
Can anyone share the minimum number of iterations that is enough for fine-tuning?