Support for other languages #30

Open · yaguangtang opened this issue Jul 2, 2019 · 106 comments

Comments

@yaguangtang commented Jul 2, 2019

Available languages

Chinese (Mandarin): #811
German: #571*
Swedish: #257*

* Requires Tensorflow 1.x (harder to set up).

Requested languages (not available yet)

Arabic: #871
Czech: #655
English: #388 (UK accent), #429 (Indian accent)
French: #854
Hindi: #525
Italian: #697
Polish: #815
Portuguese: #531
Russian: #707
Spanish: #789
Turkish: #761
Ukrainian: #492

@CorentinJ (Owner) commented Jul 2, 2019

You'll need to retrain with your own datasets to get another language running (and it's a lot of work). The speaker encoder is somewhat able to work on a few languages other than English because VoxCeleb is not purely English, but since the synthesizer/vocoder have been trained purely on English data, any voice that is not in English - or even one that does not have a proper English accent - will be cloned very poorly.

@yaguangtang (Author)

Thanks for the explanation. I have a strong interest in adding support for other languages and would like to contribute.

@CorentinJ (Owner)

You'll need a good dataset (at least ~300 hours, high quality and transcripts) in the language of your choice, do you have that?

@tail95 commented Jul 4, 2019

I want to train another language. How many speakers do I need for the encoder? Or can I use the English speaker embeddings for my language?

@CorentinJ (Owner)

From here:

A particularity of the SV2TTS framework is that all models can be trained
separately and on distinct datasets. For the encoder, one seeks to have a model
that is robust to noise and able to capture the many characteristics of the human
voice. Therefore, a large corpus of many different speakers would be preferable to
train the encoder, without any strong requirement on the noise level of the audios.
Additionally, the encoder is trained with the GE2E loss which requires no labels other
than the speaker identity. (...) For the datasets of the synthesizer and the vocoder,
transcripts are required and the quality of the generated audio can only be as good
as that of the data. Higher quality and annotated datasets are thus required, which
often means they are smaller in size.

You'll need two datasets:

The first one should be a large dataset of untranscribed audio that can be noisy. Think thousands of speakers and thousands of hours. You can get away with a smaller one if you finetune the pretrained speaker encoder. Put maybe 1e-5 as learning rate. I'd recommend 500 speakers at the very least for finetuning. A good source for datasets of other languages is M-AILABS.
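For anyone wondering where that 1e-5 goes in practice, here is a minimal, generic PyTorch sketch of the finetuning idea. It is not this repo's actual training script, and the checkpoint key is an assumption; check your own checkout.

```python
# Generic finetuning sketch: load the pretrained encoder weights, then keep
# training with a much smaller learning rate so the embedding space only
# shifts gently towards the new language.
import torch
from torch import nn, optim

def build_finetune_optimizer(model: nn.Module, checkpoint_path: str,
                             lr: float = 1e-5) -> optim.Optimizer:
    state = torch.load(checkpoint_path, map_location="cpu")
    # The pretrained encoder checkpoint appears to store weights under
    # "model_state"; fall back to a raw state_dict otherwise (assumption).
    model.load_state_dict(state["model_state"] if "model_state" in state else state)
    return optim.Adam(model.parameters(), lr=lr)
```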

The second one needs audio transcripts and high quality audio. Here, finetuning won't be as effective as for the encoder, but you can get away with less data (300-500 hours). You will likely not have the alignments for that dataset, so you'll have to adapt the preprocessing procedure of the synthesizer to not split audio on silences. See the code and you'll understand what I mean.
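To illustrate "not split audio on silences": the stock synthesizer preprocessing chops each LibriSpeech utterance into chunks at silent stretches (it can, because it has word alignments); without alignments it is safer to keep each audio/transcript pair whole and only trim edge silence. A rough sketch of that idea with librosa, where the function name and parameters are illustrative rather than the repo's actual API:

```python
# Rough sketch: keep each utterance whole instead of splitting on silences,
# because without alignments you cannot tell which words land in which chunk.
import librosa
import numpy as np

def preprocess_utterance(wav_path: str, text: str, sampling_rate: int = 16000):
    wav, _ = librosa.load(wav_path, sr=sampling_rate)
    wav, _ = librosa.effects.trim(wav, top_db=30)   # drop leading/trailing silence
    wav = wav / max(np.abs(wav).max(), 1e-4)        # peak-normalize
    return wav, text.strip()                        # one (audio, transcript) pair
```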

Don't start training the encoder if you don't have a dataset for the synthesizer/vocoder, you won't be able to do anything then.

@HumanG33k

You'll need a good dataset (at least ~300 hours, high quality and transcripts) in the language of your choice, do you have that?

Maybe it can be hacked together by using audiobooks and their pdf2text versions. The difficulty, I guess, comes from the level of expressiveness in the data sources. Maybe with some movies, but sometimes subtitles are really poor. Mozilla/Firefox is working on a dataset too, if I remember well.

@zbloss commented Jul 17, 2019

You'll need a good dataset (at least ~300 hours, high quality and transcripts) in the language of your choice, do you have that?

Maybe it can be hacked together by using audiobooks and their pdf2text versions. The difficulty, I guess, comes from the level of expressiveness in the data sources. Maybe with some movies, but sometimes subtitles are really poor. Mozilla/Firefox is working on a dataset too, if I remember well.

This is something that I have been slowly piecing together. I have been gathering audiobooks and their text versions that are in the public domain (Project Gutenberg & LibriVox recordings). My goal as of now is to develop a solid package that can gather an audio file and the corresponding book, performing the necessary cleaning and such.

Currently this project lives on my C:, but if there's interest in collaboration I'd gladly throw it here on GitHub.

@JasonWei512 commented Jul 19, 2019

How many speakers are needed for synthesizer/vocoder training?

@CorentinJ (Owner)

You'd want hundreds of speakers at least. In fact, LibriSpeech-clean makes for 460 speakers and it's still not enough.

@boltomli

There's an open 12-hour Chinese female voice set from databaker that I tried with Tacotron: https://github.com/boltomli/tacotron/blob/zh/TRAINING_DATA.md#data-baker-data. I hope I can gather more Chinese speakers to have a try at voice cloning. I'll update if I make some progress.

@CorentinJ (Owner)

That's not nearly enough to learn about the variations in speakers. Especially not on a hard language such as Chinese.

@JasonWei512 commented Jul 20, 2019

@boltomli Take a look at this dataset (1505 hours, 6408 speakers, recorded on smartphones):
https://www.datatang.com/webfront/opensource.html
Samples.zip
Not sure if the quality is good enough for encoder training.

@CorentinJ (Owner) commented Jul 20, 2019

You actually want the encoder dataset not to always be of good quality, because that makes the encoder robust. It's different for the synthesizer/vocoder, because the quality of their data is, at best, the quality of the output you will get.

@HumanG33k

You'd want hundreds of speakers at least. In fact, LibriSpeech-clean makes for 460 speakers and it's still not enough.

Couldn't that be hacked around by creating new speakers with AI, like it is done for pictures?

@Liujingxiu23

How about training the encoder/speaker verification model using English multi-speaker datasets, but training the synthesizer using a Chinese database, assuming the data is sufficient for each individual model?

@CorentinJ
Copy link
Owner

You can do that, but I would then add the synthesizer dataset in the speaker encoder dataset. In SV2TTS, they use disjoint datasets between the encoder and the synthesizer, but I think it's simply to demonstrate that the speaker encoder generalizes well (the paper is presented as a transfer learning paper over a voice cloning paper after all).

There's no guarantee the speaker encoder works well on different languages than it was trained on. Considering the difficulty of generating good Chinese speech, you might want to do your best at finding really good datasets rather than hack your way around everything.

@Liujingxiu23

@CorentinJ Thank you for your reply. Maybe I should find some Chinese ASR datasets to train the speaker verification model.

@magneter commented Aug 3, 2019

@Liujingxiu23 Have you trained a Chinese model? And could you share your model and the Chinese cloning results?

@Liujingxiu23

@magneter I have not trained the Chinese model; I don't have enough data to train the speaker verification model. I am trying to collect suitable data now.

@xw1324832579

You'd want hundreds of speakers at least. In fact, LibriSpeech-clean makes for 460 speakers and it's still not enough.

@CorentinJ Hello, ignoring speakers outside the training dataset: if I only want to ensure the quality and similarity of wavs synthesized for speakers in the training dataset (librispeech-clean), how much audio per speaker do I need for training at a minimum, maybe 20 minutes or less?

@CorentinJ (Owner)

maybe 20 minutes or less?

Wouldn't that be wonderful. You'll still need a good week or so. A few hours if you use the pretrained model. Although at this point what you're doing is no longer voice cloning, so you're not really in the right repo for that.

@shawwn commented Aug 10, 2019

This is something that I have been slowly piecing together. I have been gathering audiobooks and their text versions that are in the public domain (Project Gutenberg & LibriVox recordings). My goal as of now is to develop a solid package that can gather an audio file and the corresponding book, performing the necessary cleaning and such.

Currently this project lives on my C:, but if there's interest in collaboration I'd gladly throw it here on GitHub.

@zbloss I'm very interested. Would you be able to upload your entire dataset somewhere? Or if it's difficult to upload, is there some way I could acquire it from you directly?

Thanks!

@WendongGan

@CorentinJ @yaguangtang @tail95 @zbloss @HumanG33k I am finetuning the encoder model with Chinese data from 3,100 speakers. I want to know how to judge whether the finetuning is going well. In Figure 0, the blue line is based on 2,100 speakers and the yellow line is based on 3,100 speakers (the one currently training).
Figure 0: [loss/EER training curves]

Figure 1: (finetune 920k, from 1565k to 1610k steps, based on 2,100 speakers) [UMAP projection]

Figure 2: (finetune 45k, from 1565k to 1610k steps, based on 3,100 speakers) [UMAP projection]

I also want to know how many steps are enough, in general. The only way I know to judge the effect is to train the synthesizer and vocoder models one by one, but that takes a very long time. How do my EER and loss look? Looking forward to your reply!

@CorentinJ (Owner)

If your speakers are cleanly separated in the space (like they are in the pictures), you should be good to go! I'd be interested to compare with the same plots but before any training step was made, to see how the model does on Chinese data.
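For anyone else who wants to produce these projection plots (including the "before any training step" baseline), here is a minimal sketch that assumes you already have per-utterance embeddings and integer speaker labels as NumPy arrays; it uses the umap-learn package rather than the repo's own plotting code.

```python
# Project utterance embeddings to 2-D and colour by speaker to eyeball how
# cleanly the speakers separate (requires the umap-learn package).
import numpy as np
import umap
import matplotlib.pyplot as plt

def plot_speaker_separation(embeds: np.ndarray, speaker_ids: np.ndarray) -> None:
    # embeds: (n_utterances, embed_dim), speaker_ids: (n_utterances,) integers
    projection = umap.UMAP(metric="cosine").fit_transform(embeds)
    plt.scatter(projection[:, 0], projection[:, 1], c=speaker_ids, s=6, cmap="tab20")
    plt.title("UMAP projection of speaker embeddings")
    plt.show()
```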

@CorentinJ CorentinJ changed the title Does it support other languages except English? Support for other languages Aug 30, 2019
@CorentinJ CorentinJ pinned this issue Aug 30, 2019
@carlsLobato

I will be trying to do the same with spanish. Wish me luck. Any suggestions about compute power?

Did you get around to training the model? I found these datasets in Spanish (and many other languages): https://commonvoice.mozilla.org/es/datasets

@abelab1982

Any progress on Spanish dataset training?

Same here! Let me know if there's any news or anything I can help with for Spanish.

@carlsLobato

Any progress on Spanish dataset training?

Same here! Let me know if there's any news or anything I can help with for Spanish.

Hey, I ended up using NVIDIA's Tacotron 2 implementation. If you train it in Spanish, it speaks Spanish, so I guess it will work just as well in any language: https://github.com/NVIDIA/tacotron2

@andreafiandro

Hello,
I tried to train the model for the Italian language but I still have some issues.
The steps I followed were:

  • Preprocessing of the dataset http://www.openslr.org/94/
  • Training of the synthesizer
  • Using the synthesizer to generate the input data for the vocoder
  • Training of the vocoder

After a long training (especially for the vocoder), the output generated by the toolbox is really poor (it can't "speak" Italian).

Did I do something wrong, or did I miss some steps?

Thank you in advance

@ghost commented Jun 6, 2021

@andreafiandro Check the attention graphs from your synthesizer model training. You should get diagonal lines that look like this if attention has been learned. (This is required for inference to work) https://github.com/Rayhane-mamah/Tacotron-2/wiki/Spectrogram-Feature-prediction-network#tacotron-2-attention

If it does not look like that, you'll need additional training for the synthesizer, check the preprocessing for problems, and/or clean your dataset.
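If your training run only saves alignments as arrays rather than images, a quick way to inspect them is something like the sketch below; it assumes you have the attention matrix as a NumPy array of shape [decoder_steps, encoder_steps].

```python
# Quick visual check of an attention alignment: a synthesizer that has learned
# attention shows a roughly diagonal band in this plot.
import numpy as np
import matplotlib.pyplot as plt

def show_alignment(alignment: np.ndarray) -> None:
    # alignment: (decoder_steps, encoder_steps) attention weights
    plt.imshow(alignment.T, origin="lower", aspect="auto", interpolation="none")
    plt.xlabel("Decoder timestep")
    plt.ylabel("Encoder timestep (input characters)")
    plt.colorbar(label="Attention weight")
    plt.show()
```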

@VitoCostanzo

I tried to train the model for the Italian language but I still have some issues. (...) After a long training (especially for the vocoder), the output generated by the toolbox is really poor (it can't "speak" Italian).

@andreafiandro Please, can you share your trained file for the Italian language (the synthesizer's pretrained.pt)?

@andreafiandro

@andreafiandro Check the attention graphs from your synthesizer model training. You should get diagonal lines that look like this if attention has been learned. (This is required for inference to work) https://github.com/Rayhane-mamah/Tacotron-2/wiki/Spectrogram-Feature-prediction-network#tacotron-2-attention

If it does not look like that, you'll need additional training for the synthesizer, check the preprocessing for problems, and/or clean your dataset.

Thank you, I have something really different from the expected diagonal line:
[attention plot at step 30,000, sample 1]

Probably I made some mistake in the data preprocessing, or the dataset is too poor. I will try again, checking the results with the plots.

Do I need to edit some configuration file to set the list of characters for my language, or can I follow the same training steps described here?

@VitoCostanzo I can share the file if you want, but it isn't working for the moment.

@ghost commented Jun 21, 2021

Do I need to edit some configuration file in order to the list of character of my language or I can follow the same training step described here?

@andreafiandro - "Considerations - languages other than English" in #431 (comment)

@tiomaldy

Hello, I am trying to train the system in Spanish.
The first thing I need is to train the encoder. What do I need to change in the code, or what are the step-by-step instructions for the training? Can someone help me?

@selcuk-cofe

How to train for Turkish?

@babysor commented Aug 7, 2021

Thank you for sharing the zhrtvc pretrained models @windht ! It will not be as obvious in the future, so for anyone else who wants to try, the models work flawlessly with this commit: https://github.com/KuangDD/zhrtvc/tree/932d6e334c54513b949fea2923e577daf292b44e

What I like about zhrtvc:

  • Display alignments for synthesized spectrograms
  • Option to preprocess wavs for making the speaker embedding.
  • Auto-save generated wavs (though I prefer our solution in Export and replay generated wav #402)

Melgan is integrated but it doesn't work well with the default synthesizer model, so I ended up using Griffin-Lim most of the time for testing. WaveRNN quality is not that good either so it might be an issue on my end.

I'm trying to come up with ideas for this repo to support other languages without having to edit files.

All links to KuangDD's projects are no longer accessible. I'm currently working on the latest fork of this repo to support Mandarin; if anyone wants to use it as a reference, please feel free to fork it and train: https://github.com/babysor/Realtime-Voice-Clone-Chinese

@ghost commented Oct 8, 2021

The original issue has been edited to provide visibility of community-developed voice cloning models in other languages. I'll also use it to keep track of requests.

@ghost ghost pinned this issue Oct 11, 2021
@Hiraokii

From here: "A particularity of the SV2TTS framework is that all models can be trained separately and on distinct datasets. (...)" You'll need two datasets: a large, possibly noisy, untranscribed one for the speaker encoder, and a smaller high-quality transcribed one (300-500 hours) for the synthesizer/vocoder.

Can this be done with some audiobooks?

@rphad23 commented Nov 20, 2021

When will French be done?

@ugurpekunsal commented Dec 25, 2021

How to train for Turkish?

Have you had any luck with training Turkish?

@selcuk-cofe

@neonsecret

I've made a custom fork: https://github.com/neonsecret/Real-Time-Voice-Cloning-Multilang
It now supports training a bilingual en+ru model, and it's easy to add new languages based on my fork

@Abdelrahman-Shahda

@CorentinJ I am planning to use your pre-trained modules to generate English audio, but in my case I want my source audio to be Spanish, so I should only worry about training the encoder, right? And if I wanted to add emotions to the generated voice, does the vocoder support this?

@neonsecret

@Abdelrahman-Shahda
No, you should train only the synthesizer and edit the symbols.py file, see #941.
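For context on the symbols.py edit: the synthesizer only models the characters listed in its symbol set, so a new language's letters have to be added there before preprocessing, otherwise they are dropped as unknown symbols. A sketch of what that could look like for Spanish, assuming your checkout keeps the character list in synthesizer/utils/symbols.py as a plain string (illustrative, not the exact upstream file):

```python
# Illustrative symbols.py edit: extend the character set so Spanish letters
# and punctuation survive text preprocessing.
_pad = "_"
_eos = "~"
_characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890!'(),-.:;? "
_characters += "áéíóúüñÁÉÍÓÚÜÑ¿¡"  # extra characters needed for Spanish

symbols = [_pad, _eos] + list(_characters)
```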

@Abdelrahman-Shahda

@neonsecret Okay, great. For the emotion part, should I keep extracting the embedding each time rather than once for a single user? (I don't know if this will cause the encoder embeddings to vary based on the emotions.)

@neonsecret

@Abdelrahman-Shahda I think you should just train as normal; if your emotional audio has exclamation marks in the transcript (like "hello!" or "hello!!") you should be fine.

Repository owner deleted a comment from ALIXGUSTAF May 9, 2022
Repository owner deleted a comment from ALIXGUSTAF May 9, 2022
@pauortegariera

Hi everyone, I would like to know how much training time each module requires using a GPU (approximately).

@keshawnhsieh

I uploaded the latest pretrained model at 1,830,000 steps (183W). [UMAP plots]

Where did you put your pretrained model? I don't see any links on your forked repo. @iwater

@keshawnhsieh

@CorentinJ @yaguangtang @tail95 @zbloss @HumanG33k I am finetuning the encoder model with Chinese data from 3,100 speakers. (...) How do my EER and loss look?

Could you please share the Chinese encoder model with me? @UESTCgan
