Hello,

How would I extend this model to n+1 target speakers in order to perform any/many-to-many conversion? When I increase the number of speakers to include those from LibriTTS plus our own dataset and load the pretrained Cotatron weights, I get an embedding size mismatch error when trying to train the decoder, because the speaker embedding dimension is derived from the speakers_list in the global config.yaml. Should I simply leave speakers_list unchanged, i.e. not add our dataset's speaker names (keep only LibriTTS + VCTK), but train the decoder/synthesizer on the combined data (LibriTTS + our dataset)?
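For reference, this is roughly the workaround I was considering: a minimal sketch (the function name, checkpoint layout, and the idea that the speaker table is a plain nn.Embedding sized by len(speakers_list) are my assumptions, not the repo's actual identifiers) that loads the pretrained checkpoint but skips any tensor whose shape changed after extending speakers_list, leaving the enlarged speaker embedding randomly initialized.

```python
import torch
import torch.nn as nn

def load_pretrained_skip_mismatch(model: nn.Module, ckpt_path: str) -> None:
    """Load pretrained weights, dropping tensors whose shapes no longer match.

    Sketch only: assumes the checkpoint is either a raw state_dict or a dict
    with a "state_dict" key, as PyTorch Lightning typically saves.
    """
    ckpt = torch.load(ckpt_path, map_location="cpu")
    pretrained = ckpt.get("state_dict", ckpt)

    model_state = model.state_dict()
    # Keep only parameters that exist in the current model with the same shape.
    filtered = {
        k: v for k, v in pretrained.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    skipped = sorted(set(pretrained) - set(filtered))
    # The speaker embedding weight (len(speakers_list) x emb_dim) ends up here
    # and stays randomly initialized for the newly added speakers.
    print(f"Skipped {len(skipped)} mismatched/unknown tensors: {skipped}")

    model.load_state_dict(filtered, strict=False)
```

Would something like this be reasonable, or is retraining with the full combined speakers_list from scratch the intended path?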
Thanks