This project is a part of Mozilla Common Voice. TTS aims to be a deep-learning-based Text2Speech engine that is low in cost and high in quality. To begin with, you can listen to a sample generated voice here.
TTS includes two different model implementations, based on Tacotron and Tacotron2. Tacotron is smaller, more efficient, and easier to train, but Tacotron2 produces better results, especially when combined with a neural vocoder. Choose according to your project requirements.
If you are new to the field, you can also find a brief post here comparing TTS architectures.
We highly recommend using miniconda for easier installation.
- python>=3.6
- pytorch>=0.4.1
- librosa
- tensorboard
- tensorboardX
- matplotlib
- unidecode
Install TTS using setup.py
. It installs all requirements automatically and makes TTS available to your Python environment as an ordinary Python module.
python setup.py develop
Or you can use requirements.txt
to install the requirements only.
pip install -r requirements.txt
A barebone Dockerfile
exists at the root of the project, which should let you quickly set up the environment. By default, it starts the server and lets you query it. Make sure to use nvidia-docker
to use your GPUs. Make sure you follow the instructions in the server README
before you build your image so that the server can find the model within the image.
docker build -t mozilla-tts .
nvidia-docker run -it --rm -p 5002:5002 mozilla-tts
You can compare the samples below (except the first) here.
Models | Dataset | Commit | Audio Sample | Details |
---|---|---|---|---|
Tacotron-iter-62410 | LJSpeech | 99d56f7 | link | First model with plain Tacotron implementation. |
Tacotron-iter-170K | LJSpeech | e00bc66 | link | More stable and longer trained model. |
Tacotron-iter-270K | LJSpeech | 256ed63 | link | Stop-Token prediction is added, to detect end of speech. |
Tacotron-iter-120K | LJSpeech | bf7590 | link | Better for longer sentences. |
Tacotron-iter-108K | TWEB | 2810d57 | link | mozilla/TTS#22 |
Tacotron-iter-185K | LJSpeech | db7f3d3 | link | link |
Tacotron2-iter-260K | LJSpeech | 824c091 | soundcloud | link |
Below you can see the model state after 16K iterations with batch size 32.
"Recent research at Harvard has shown meditating for as little as 8 weeks can actually increase the grey matter in the parts of the brain responsible for emotional regulation and learning."
Audio output: https://soundcloud.com/user-565970875/iter16k-f48c3b
The most time-consuming part is the vocoder algorithm (Griffin-Lim), which runs on CPU. By reducing its number of iterations, you can trade a small loss of quality for faster execution. Some experimental values are below.
Sentence: "It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent."
Audio length is approximately 6 secs.
Time (secs) | System | # GL iters |
---|---|---|
2.00 | GTX1080Ti | 30 |
3.01 | GTX1080Ti | 60 |
TTS provides a generic dataloader that is easy to use with new datasets. You only need to write an adaptor that formats your dataset. Check datasets/preprocess.py
to see example adaptors. After you write an adaptor, set the dataset
field in config.json
. Do not forget other data related fields.
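As a rough sketch, an adaptor is just a function that parses your metadata file into (text, audio path) pairs. The field order and return format below are assumptions for illustration; mirror an existing adaptor in datasets/preprocess.py instead:

```python
import os

def my_dataset(root_path, meta_file):
    """Illustrative adaptor: parse a pipe-delimited metadata file into
    [text, wav_path] items. The column layout (id|raw|normalized) and the
    return format are assumptions -- check datasets/preprocess.py."""
    items = []
    with open(os.path.join(root_path, meta_file), encoding="utf-8") as f:
        for line in f:
            cols = line.strip().split("|")
            wav_path = os.path.join(root_path, "wavs", cols[0] + ".wav")
            items.append([cols[-1], wav_path])  # normalized text, audio path
    return items
```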
Example datasets to which we successfully applied TTS are linked below.
Click here for a hands-on notebook example of training on LJSpeech.
Split metadata.csv
into train and validation subsets respectively metadata_train.csv
and metadata_val.csv
. Note that, unlike many other ML problems, a validation split does not work well here: at validation time the model generates spectrogram slices without "Teacher-Forcing", which leads to misalignment between the ground truth and the prediction. Therefore, the validation loss does not really reflect model performance. Instead, you might use all the data for training and judge model performance by human inspection.
shuf metadata.csv > metadata_shuf.csv
head -n 12000 metadata_shuf.csv > metadata_train.csv
tail -n 1100 metadata_shuf.csv > metadata_val.csv
To train a new model, you need to define your own config.json
file (check the example) and run the command below. You also set the model architecture in config.json
.
train.py --config_path config.json
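For orientation, a config might contain fields like the following. The field names here are illustrative assumptions only; always start from the example config shipped with the repo:

```json
{
  "model": "Tacotron",
  "dataset": "ljspeech",
  "data_path": "/path/to/LJSpeech-1.1/",
  "batch_size": 32,
  "r": 5,
  "lr": 0.0001,
  "output_path": "/path/to/outputs/"
}
```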
To fine-tune a model, use --restore_path
.
train.py --config_path config.json --restore_path /path/to/your/model.pth.tar
For multi-GPU training use distribute.py
. It enables process-based multi-GPU training, where each process uses a single GPU.
CUDA_VISIBLE_DEVICES="0,1,4" distribute.py --config_path config.json
Each run creates a new output folder and config.json
is copied under this folder.
If execution fails or is interrupted and there is no checkpoint yet under the output folder, the whole folder is removed.
You can also use Tensorboard by pointing its --logdir
argument to the experiment folder.
The best way to test your network is to use the notebooks under the notebooks
folder.
- Discourse Forums - If your question is not addressed in the Wiki, the Discourse Forums are the next place to look. They contain conversations on General Topics, Using TTS, and TTS Development.
- Issues - Finally, if all else fails, you can open an issue in our repo.
If you train TTS with the LJSpeech dataset, you start to hear reasonable results after 12.5K iterations with batch size 32. To our knowledge, this is the fastest training among character-based methods. Our implementation is also quite robust to long sentences.
- Location sensitive attention (ref). Attention is a vital part of text2speech models, so it is important to use an attention mechanism that suits the diagonal nature of the problem, where the output aligns strictly and monotonically with the text. Location sensitive attention performs better by looking at the previous alignment vectors and learns diagonal attention more easily. Still, we believe there is room for research on this front to find a better solution.
- Attention smoothing with sigmoid (ref). Attention weights are computed by normalized sigmoid values instead of softmax for sharper values. That enables the model to pick multiple highly scored inputs for alignments while reducing the noise.
- Weight decay (ref). After a certain point in training, you might observe the model over-fitting: it may pronounce words more accurately, but the overall speech quality degrades and the attention alignment sometimes gets disoriented.
- Stop token prediction with an additional module. The original Tacotron model does not provide a stop token to end the decoding process, so you need heuristic measures to stop the decoder. Here, we prefer to add layers at the end of the network that decide when to stop.
- Applying sigmoid to the model outputs. Since the output values are expected to be in the range [0, 1], applying a sigmoid makes it easier to approximate the expected output distribution.
- Phoneme based training is enabled for easier learning and robust pronunciation. It also makes it easier to adapt TTS to most languages without worrying about language-specific characters.
- Configurable attention windowing at inference-time for robust alignment. It forces the network to consider only a certain window of encoder steps per iteration.
- Detailed Tensorboard stats for activation, weight and gradient values per layer. It is useful to detect defects and compare networks.
- Constant history window. Instead of using only the last frame of predictions, define a constant history queue. This enables training with a gradually decreasing prediction frame (r=5 --> r=1) by changing only the last layer. For instance, you can train the model with r=5 and then fine-tune it with r=1 without any performance loss. It also solves the well-known PreNet problem #50.
- Initialization of hidden decoder states with Embedding layers instead of zero initialization.
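The sigmoid attention smoothing described above can be sketched in a few lines of NumPy. This is a toy illustration of the idea, not the repo's implementation:

```python
import numpy as np

def softmax_attention(scores):
    # Standard softmax: competition between scores makes the
    # distribution peaky around the maximum.
    z = np.exp(scores - scores.max())
    return z / z.sum()

def smoothed_attention(scores):
    # Each score is squashed independently with a sigmoid, then the
    # weights are renormalized to sum to one. Several highly scored
    # encoder steps can keep comparable weight, and the distribution
    # is less peaked than softmax.
    s = 1.0 / (1.0 + np.exp(-scores))
    return s / s.sum()

# Toy alignment scores over four encoder steps.
scores = np.array([4.0, 4.0, 0.0, -4.0])
w_soft = softmax_attention(scores)
w_sig = smoothed_attention(scores)
```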
One common question is why we don't use the Tacotron2 architecture. According to our ablation experiments, nothing except location sensitive attention improves performance, given the increase in model size.
Please feel free to propose changes and send pull requests. We are happy to discuss and improve things.
- Implement the model.
- Generate human-like speech on LJSpeech dataset.
- Generate human-like speech on a different dataset (Nancy) (TWEB).
- Train TTS with r=1 successfully.
- Enable process based distributed training. Similar to (https://github.com/fastai/imagenet-fast/).
- Adapting Neural Vocoder. TTS works with (https://github.com/erogol/WaveRNN)
- Multi-speaker embedding.
- Efficient Neural Audio Synthesis
- Attention-Based models for speech recognition
- Generating Sequences With Recurrent Neural Networks
- Char2Wav: End-to-End Speech Synthesis
- VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop
- WaveRNN
- Faster WaveNet
- Parallel WaveNet
- https://github.com/keithito/tacotron (Dataset and Test processing)
- https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture)