Using LVCNet as the generator of Parallel WaveGAN, and training it with the same strategy, the new vocoder synthesizes speech more than 5x faster than the original Parallel WaveGAN, with no degradation in audio quality.
Our current work [Paper] has been accepted by ICASSP 2021, and our previous work is described in MelGlow.
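The key building block is the location-variable convolution, in which the convolution kernels applied to the waveform are predicted from the local mel-spectrogram condition rather than being fixed. Below is a minimal PyTorch sketch of that idea only; the layer sizes, the one-layer kernel predictor, and the per-segment loop are illustrative assumptions and do not reproduce the actual LVCNet layers in this repository.

```python
# Toy sketch of a location-variable convolution (LVC) layer.
# NOTE: shapes and the kernel predictor are illustrative assumptions,
# not the implementation used in this repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleLVC(nn.Module):
    def __init__(self, channels=8, cond_channels=80, kernel_size=3, hop_length=256):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        self.hop_length = hop_length  # waveform samples covered by one mel frame (assumed)
        # Predict one depthwise kernel (channels x kernel_size) for every mel frame.
        self.kernel_predictor = nn.Conv1d(cond_channels, channels * kernel_size, 1)

    def forward(self, x, mel):
        # x:   (batch, channels, frames * hop_length)  waveform-rate features
        # mel: (batch, cond_channels, frames)           local acoustic condition
        b, c, _ = x.shape
        frames = mel.shape[-1]
        kernels = self.kernel_predictor(mel)                      # (b, c * k, frames)
        kernels = kernels.permute(0, 2, 1).reshape(b, frames, c, self.kernel_size)
        pad = (self.kernel_size - 1) // 2
        out = torch.zeros_like(x)
        for f in range(frames):  # a different kernel for every waveform segment
            seg = x[:, :, f * self.hop_length:(f + 1) * self.hop_length]
            seg = F.pad(seg, (pad, pad)).reshape(1, b * c, -1)    # fold batch into channels
            w = kernels[:, f].reshape(b * c, 1, self.kernel_size)
            y = F.conv1d(seg, w, groups=b * c)                    # depthwise conv, per-sample kernels
            out[:, :, f * self.hop_length:(f + 1) * self.hop_length] = y.reshape(b, c, -1)
        return out


if __name__ == "__main__":
    lvc = SimpleLVC()
    x = torch.randn(2, 8, 4 * 256)   # 4 mel frames worth of waveform-rate features
    mel = torch.randn(2, 80, 4)
    print(lvc(x, mel).shape)         # torch.Size([2, 8, 1024])
```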
- Prepare the data: download the LJSpeech dataset from https://keithito.com/LJ-Speech-Dataset/ and save it in `data/LJSpeech-1.1`. Then run

  ```bash
  python -m vocoder.preprocess --data-dir ./data/LJSpeech-1.1 --config configs/lvcgan.v1.yaml
  ```

  The mel-spectrograms are computed and saved in the folder `temp/` (a rough sketch of this mel extraction is shown after this list).

- Train LVCNet:

  ```bash
  python -m vocoder.train --config configs/lvcgan.v1.yaml --exp-dir exps/exp.lvcgan.v1
  ```

- Test LVCNet:

  ```bash
  python -m vocoder.test --config configs/lvcgan.v1.yaml --exp-dir exps/exp.lvcgan.v1
  ```

- The experimental results, including training logs, model checkpoints, and synthesized audio, are stored in the folder `exps/exp.lvcgan.v1/`.
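As referenced in the data-preparation step, the snippet below sketches the kind of mel-spectrogram extraction such a preprocessing step typically performs. The sample rate, FFT size, hop length, number of mel bands, and output path are illustrative assumptions; the actual values are defined in `configs/lvcgan.v1.yaml` and applied by `vocoder.preprocess`.

```python
# Hedged sketch of mel-spectrogram extraction for one LJSpeech utterance.
# Parameters are illustrative assumptions, not the values used by this repository.
import numpy as np
import librosa


def extract_mel(wav_path, sr=22050, n_fft=1024, hop_length=256, n_mels=80):
    wav, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    log_mel = np.log(np.clip(mel, 1e-5, None))  # log-compress for stability
    return log_mel  # shape: (n_mels, frames)


# Example usage (hypothetical file names):
# mel = extract_mel("data/LJSpeech-1.1/wavs/LJ001-0001.wav")
# np.save("temp/LJ001-0001_mel.npy", mel)
```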
Similarly, you can also use the config file `configs/pwg.v1.yaml` to train a Parallel WaveGAN model:

```bash
# training
python -m vocoder.train --config configs/pwg.v1.yaml --exp-dir exps/exp.pwg.v1

# test
python -m vocoder.test --config configs/pwg.v1.yaml --exp-dir exps/exp.pwg.v1
```
Use TensorBoard to view the training process:

```bash
tensorboard --logdir exps
```
Audio samples are saved in `samples/`, where `samples/*_lvc.wav` are generated by LVCNet, `samples/*_pwg.wav` are generated by Parallel WaveGAN, and `samples/*_real.wav` are the real audio.
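To compare the three versions of each utterance side by side, a small script along the following lines can pair the files by utterance id. It relies only on the naming convention above; the example file name in the comment is hypothetical.

```python
# Group the audio samples by utterance, following the naming convention above
# (*_lvc.wav, *_pwg.wav, *_real.wav).
import glob
import os

groups = {}
for path in sorted(glob.glob("samples/*.wav")):
    stem = os.path.splitext(os.path.basename(path))[0]  # e.g. "LJ001-0001_lvc" (hypothetical)
    utt_id, _, kind = stem.rpartition("_")               # -> ("LJ001-0001", "_", "lvc")
    groups.setdefault(utt_id, {})[kind] = path

for utt_id, files in sorted(groups.items()):
    print(utt_id, files.get("real"), files.get("pwg"), files.get("lvc"))
```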
- LVCNet: Efficient Condition-Dependent Modeling Network for Waveform Generation, https://arxiv.org/abs/2102.10815
- MelGlow: Efficient Waveform Generative Network Based on Location-Variable Convolution, https://arxiv.org/abs/2012.01684
- https://github.com/kan-bayashi/ParallelWaveGAN
- https://github.com/lmnt-com/diffwave