The WaveNet neural network architecture generates raw audio waveforms directly and has shown excellent results in text-to-speech and general audio generation.
Moreover, it can be applied to almost any sequence generation task, including text and images.
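For background, the sketch below shows the core building block that makes this possible: a stack of dilated causal 1-D convolutions whose receptive field grows exponentially with depth. It assumes TensorFlow 1.x (`tf.layers.conv1d`); the function names and layer sizes are illustrative and not taken from this repository.

```python
import tensorflow as tf

def causal_conv1d(x, filters, kernel_size, dilation):
    """Dilated causal convolution: pad on the left only, so output[t]
    depends solely on input[<= t]."""
    pad = (kernel_size - 1) * dilation
    x = tf.pad(x, [[0, 0], [pad, 0], [0, 0]])    # x: (batch, time, channels)
    return tf.layers.conv1d(x, filters, kernel_size,
                            dilation_rate=dilation, padding='valid')

def dilated_stack(x, channels=32, num_layers=10):
    """Stack with dilations 1, 2, 4, ..., 512: the receptive field
    roughly doubles with every layer."""
    h = causal_conv1d(x, channels, kernel_size=2, dilation=1)
    for i in range(num_layers):
        # simplified residual block (WaveNet itself uses a gated tanh/sigmoid activation)
        h = h + tf.nn.relu(causal_conv1d(h, channels, 2, dilation=2 ** i))
    return h
```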
This repository provides several works related to WaveNet:
- Local conditioning
- Generalized fast generation algorithm
- Mixture of discretized logistics loss (a rough sketch follows this list)
- Parallel WaveNet
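For orientation, here is a rough NumPy sketch of a discretized mixture-of-logistics negative log-likelihood in the spirit of PixelCNN++; the helper name and argument layout are illustrative, not this repository's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_mix_logistic_nll(x, logit_probs, means, log_scales, num_classes=256):
    """NLL of targets x in [-1, 1] under a mixture of discretized logistics.

    x:           (batch,)    target samples scaled to [-1, 1]
    logit_probs: (batch, K)  unnormalized mixture weights
    means:       (batch, K)  component means
    log_scales:  (batch, K)  log of component scales
    """
    x = x[:, None]                           # broadcast against the K components
    inv_s = np.exp(-log_scales)
    half_bin = 1.0 / (num_classes - 1)       # half the width of one quantization bin
    cdf_plus = sigmoid(inv_s * (x + half_bin - means))
    cdf_minus = sigmoid(inv_s * (x - half_bin - means))
    # probability mass of the bin containing x; the edge bins absorb the tails
    prob = np.where(x < -0.999, cdf_plus,
                    np.where(x > 0.999, 1.0 - cdf_minus, cdf_plus - cdf_minus))
    log_prob = np.log(np.maximum(prob, 1e-12))
    # normalize the mixture weights and combine with a log-sum-exp
    log_pi = logit_probs - np.log(np.exp(logit_probs).sum(axis=1, keepdims=True))
    joint = log_pi + log_prob                # (batch, K)
    m = joint.max(axis=1, keepdims=True)
    return -(m[:, 0] + np.log(np.exp(joint - m).sum(axis=1)))
```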
We generalized Fast WaveNet to filter widths > 1 by using a multi-queue structured matrix of size (dilation x (filter_width - 1) x batch_size x channel_size).
When generating a sample, you must feed the number of samples generated so far to the function, because the queue has to select the correct slot before the enqueue operation.
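A minimal NumPy sketch of the idea (the `MultiQueue` class and its `step` interface are illustrative, not the repository's actual API): each layer keeps one FIFO per dilation offset, and the number of already generated samples selects which slot to read and update.

```python
import numpy as np

class MultiQueue(object):
    """Per-layer cache for incremental generation with filter_width > 1.

    Stores the (filter_width - 1) most recent inputs of each of the
    `dilation` interleaved sub-sequences, so a dilated convolution at
    generation step n only needs slot n % dilation.
    """
    def __init__(self, dilation, filter_width, batch_size, channels):
        self.dilation = dilation
        self.buf = np.zeros((dilation, filter_width - 1, batch_size, channels))

    def step(self, current_input, num_generated):
        # The slot is chosen from the number of already generated samples
        # *before* pushing the new input ("choose before the queueing operation").
        slot = num_generated % self.dilation
        past = self.buf[slot]                                    # (filter_width - 1, batch, channels)
        window = np.concatenate([past, current_input[None]], 0)  # inputs the filter needs
        self.buf[slot] = np.concatenate([past[1:], current_input[None]], 0)  # drop oldest, append newest
        return window

# usage: at generation step n, pass how many samples were generated so far
q = MultiQueue(dilation=4, filter_width=2, batch_size=1, channels=32)
window = q.step(np.zeros((1, 32)), num_generated=0)              # shape (2, 1, 32)
```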
You can easily find the modified points and the details of the algorithm here.
Check the usage of the incremental generator here.
A neural vocoder can generate high-quality raw speech samples conditioned on linguistic or acoustic features.
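As a rough sketch of how such local conditioning typically enters the network, the hypothetical layer below adds a 1x1-projected, audio-rate conditioning feature (e.g. an upsampled mel spectrogram) inside the gated activation of a dilated causal convolution; it assumes TensorFlow 1.x and is not the repository's exact implementation.

```python
import tensorflow as tf

def conditioned_gated_layer(x, c, channels, kernel_size=2, dilation=1):
    """Gated activation with local conditioning:
    z = tanh(W_f * x + V_f * c) * sigmoid(W_g * x + V_g * c)

    x: (batch, time, channels)  audio-rate hidden activations
    c: (batch, time, cond_dim)  conditioning features upsampled to audio rate
    """
    pad = (kernel_size - 1) * dilation
    x_pad = tf.pad(x, [[0, 0], [pad, 0], [0, 0]])   # left padding keeps the layer causal
    filt = tf.layers.conv1d(x_pad, channels, kernel_size,
                            dilation_rate=dilation, padding='valid')
    gate = tf.layers.conv1d(x_pad, channels, kernel_size,
                            dilation_rate=dilation, padding='valid')
    # 1x1 projections of the conditioning features are added inside the nonlinearity
    filt += tf.layers.conv1d(c, channels, 1)
    gate += tf.layers.conv1d(c, channels, 1)
    return tf.tanh(filt) * tf.sigmoid(gate)
```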
We tested our model following @r9y9's work.
Audio samples are available at https://twidddj.github.io/docs/vocoder. See the issue here for the results.
| Model URL | Data | Steps |
|---|---|---|
| link | LJSpeech | 680k |
- The voice conversion dataset (multi-speaker, 16 kHz): cmu_arctic
- The single-speaker dataset (22.05 kHz): LJSpeech-1.0
```
python -m apps.vocoder.preprocess --num_workers 4 --name ljspeech --in_dir /your_path/LJSpeech-1.0 --out_dir /your_outpath/
```
```
python -m apps.vocoder.train --metadata_path {~/yourpath/train.txt} --data_path {~/yourpath/npy} --log_dir {~/log_dir_path}
```
You can find the code for testing a trained model here.
The code is tested with TensorFlow 1.4 and Python 3.6.