# The Implementation of FastSpeech Based on PyTorch
## Update
- Fix bugs in alignment;
- Fix bugs in the Transformer;
- Fix bugs in the LengthRegulator (a sketch of the length-regulation idea follows this list);
- Change the way audio is processed;
- Use SqueezeWave to synthesize.
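The LengthRegulator is the core idea of FastSpeech: each phoneme's encoder state is repeated according to its predicted duration to produce frame-level states. Here is a minimal sketch of that expansion (the function name, shapes, and data are illustrative, not this repo's exact code):

```python
import torch

def length_regulate(hidden, durations):
    """Expand phoneme-level states to frame level by repeating each
    state durations[i] times, as in the FastSpeech LengthRegulator.

    hidden:    (num_phonemes, dim) encoder outputs
    durations: (num_phonemes,) integer frame counts per phoneme
    returns:   (durations.sum(), dim) frame-level states
    """
    return torch.repeat_interleave(hidden, durations, dim=0)

hidden = torch.randn(4, 8)                       # 4 phonemes, 8-dim states
durations = torch.tensor([2, 3, 1, 4])           # predicted frames per phoneme
print(length_regulate(hidden, durations).shape)  # torch.Size([10, 8])
```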
## Dependencies
- python 3.6
- CUDA 10.0
- pytorch==1.1.0
- numpy>=1.16.2
- scipy>=1.2.1
- librosa>=0.7.2
- inflect>=2.1.0
- matplotlib>=2.2.2
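Assuming a standard pip environment, the Python packages above can be installed in one step (`torch` is PyTorch's pip package name; a CUDA 10.0 build may require a platform-specific wheel):

```
pip install torch==1.1.0 "numpy>=1.16.2" "scipy>=1.2.1" "librosa>=0.7.2" "inflect>=2.1.0" "matplotlib>=2.2.2"
```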
## Data Preparation
- Download and extract the LJSpeech dataset.
- Put the LJSpeech dataset in `data`.
- Unzip `alignments.zip`. *
- Put the pretrained SqueezeWave model in `squeezewave/pretrained_model`.
- Run `python preprocess.py`.

\* If you want to calculate the alignments yourself, don't unzip `alignments.zip`; instead, put the NVIDIA pretrained Tacotron2 model in `Tacotron2/pretrained_model` (a sketch of turning an alignment into durations follows this list).
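For reference, a duration sequence can be read off a Tacotron2 alignment by counting, for each phoneme, how many decoder frames attend to it most strongly. A minimal sketch of that idea (the attention shape and function name are assumptions, not this repo's exact code):

```python
import torch

def alignment_to_durations(attn):
    """attn: (num_frames, num_phonemes) Tacotron2 attention weights.
    Returns (num_phonemes,) integer durations: how many frames put
    their strongest attention on each phoneme."""
    best = attn.argmax(dim=1)  # winning phoneme index per frame
    return torch.bincount(best, minlength=attn.size(1))

attn = torch.softmax(torch.randn(50, 12), dim=1)  # fake 50-frame alignment
print(alignment_to_durations(attn))               # entries sum to 50
```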
## Training
- Run `python train.py`.
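FastSpeech is trained with a mel-spectrogram reconstruction loss plus a loss on the duration predictor. A generic sketch of that objective with placeholder tensors (names and shapes are illustrative, not this repo's variables):

```python
import torch
import torch.nn.functional as F

# Placeholders standing in for model outputs and targets.
mel_pred = torch.randn(80, 500, requires_grad=True)  # predicted mel-spectrogram
mel_target = torch.randn(80, 500)                    # ground-truth mel-spectrogram
dur_pred = torch.rand(40, requires_grad=True)        # predicted durations per phoneme
dur_target = torch.rand(40)                          # durations from the Tacotron2 alignment

mel_loss = F.mse_loss(mel_pred, mel_target)          # spectrogram reconstruction
duration_loss = F.mse_loss(dur_pred, dur_target)     # duration predictor
(mel_loss + duration_loss).backward()                # gradients for one step
```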
## Synthesis
- Run `python synthesis.py "write your TTS here"`.
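Synthesis is a two-stage pipeline: FastSpeech maps text to a mel-spectrogram, then SqueezeWave converts the mel-spectrogram to a waveform. The sketch below only mimics that data flow and the timing printout; the lambdas are fake stand-ins for the real models, which `synthesis.py` loads from checkpoints:

```python
import time
import torch

# Fake stand-ins for the real components -- shapes are illustrative only.
text_to_sequence = lambda s: torch.randint(0, 100, (1, len(s)))  # text -> token ids
fastspeech = lambda seq: torch.randn(1, 80, seq.size(1) * 5)     # ids -> mel (fake)
squeezewave = lambda mel: torch.randn(1, mel.size(2) * 256)      # mel -> waveform (fake)

start = time.time()
with torch.no_grad():
    mel = fastspeech(text_to_sequence("How are you"))
    wav = squeezewave(mel)
print("Speech synthesis time:", time.time() - start)
```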
## Performance on CPU (Intel® Core™ i5-6300U)

### Example 1
```
taskset --cpu-list 1 python3 synthesis.py "Fastspeech with Squeezewave vocoder in pytorch , very fast inference on cpu"
```
Speech synthesis time: 1.7220683097839355 s

`soxi` output:
```
Input File     : 'results/Fastspeech with Squeezewave vocoder in pytorch , very fast inference on cpu_112000_squeezewave.wav'
Channels       : 1
Sample Rate    : 22050
Precision      : 16-bit
Duration       : 00:00:05.96 = 131328 samples ~ 446.694 CDDA sectors
File Size      : 263k
Bit Rate       : 353k
Sample Encoding: 16-bit Signed Integer PCM
```
Approximately 6 s of audio synthesized in 1.72 s on a single CPU core.
### Example 2
```
taskset --cpu-list 0 python3 synthesis.py "How are you"
```
Speech synthesis time: 0.3431851863861084 s

`soxi` output:
```
Input File     : 'results/How are you _112000_squeezewave.wav'
Channels       : 1
Sample Rate    : 22050
Precision      : 16-bit
Duration       : 00:00:00.85 = 18688 samples ~ 63.5646 CDDA sectors
File Size      : 37.4k
Bit Rate       : 353k
Sample Encoding: 16-bit Signed Integer PCM
```
0.85 s of audio synthesized in 0.34 s on a single CPU core.
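A convenient way to read these numbers is the real-time factor (synthesis time divided by audio duration); both examples come in well under 1.0:

```python
# Real-time factor from the measurements above.
for name, synth_s, audio_s in [("example 1", 1.722, 5.96),
                               ("example 2", 0.343, 0.85)]:
    print(f"{name}: RTF = {synth_s / audio_s:.2f}")
# example 1: RTF = 0.29
# example 2: RTF = 0.40
```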
## Pretrained Model
- Baidu: Step 112000 (enter code: xpk7)
- OneDrive: Step 112000
## Notes
- In the FastSpeech paper, the authors use a pre-trained Transformer-TTS model to provide the alignment targets. I didn't have a well-trained Transformer-TTS model, so I used Tacotron2 instead.
- Example audio files are in `results`.
- The outputs and alignment of Tacotron2 are shown below (the sentence synthesized is "I want to go to CMU to do research on deep learning."):
- The outputs of FastSpeech and Tacotron2 (Tacotron2 on the right) are shown below (the sentence synthesized is "Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition."):
## Reference
- The Implementation of Tacotron Based on TensorFlow
- The Implementation of Transformer Based on PyTorch
- The Implementation of Transformer-TTS Based on PyTorch
- The Implementation of Tacotron2 Based on PyTorch
- The Implementation of SqueezeWave Based on PyTorch
- SqueezeWave: Extremely Lightweight Vocoders for On-device Speech Synthesis
- FastSpeech: Fast, Robust and Controllable Text to Speech