Please Help me about this #104
Why is train_step left at its default? Is the resulting loss still good if the training step count goes above or below the default? I've tried the LJSpeech dataset; why is only one sentence generated in WaveNet? How do I use the model?
Hello @ilhamprayudha, please refer to the README for information on how to use the model (both during training and synthesis). As @cobr123 said, "max_mel_frames" is responsible for limiting data by length so you don't get OOM errors during training. Test different max_mel_frames values to find what suits you best. You can also avoid OOM errors by increasing "outputs_per_step" (I don't recommend going above 3). It is also recommended to keep batch_size=32.
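A minimal sketch of how those three knobs interact (the parameter names come from the comment above; the numeric values below are only illustrative, so check your own hparams.py for the real defaults):

```python
# Illustrative values only -- look up the real defaults in hparams.py.
max_mel_frames = 900     # utterances longer than this many mel frames are dropped/clipped
outputs_per_step = 2     # mel frames predicted per decoder step (r); larger = fewer steps
batch_size = 32          # value recommended above

# Fewer decoder iterations per utterance means a smaller unrolled decoder graph,
# which is what relieves GPU memory pressure and avoids OOM.
decoder_steps = max_mel_frames // outputs_per_step
print(f"decoder iterations per (longest) utterance: {decoder_steps}")
```

Lowering max_mel_frames trades away the longest utterances, while raising outputs_per_step trades some per-step output resolution, which is presumably why values above 3 are discouraged.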
Can you give a clear explanation of why generation takes such a long time?
@ilhamprayudha I provided an answer here. This repo's speed: 1 second of audio is generated in 62 seconds.
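At that rate you can estimate the synthesis wall-clock time directly; a back-of-the-envelope sketch (the 5-second utterance is just a hypothetical example):

```python
# Rough synthesis-time estimate at ~62x slower than real time (figure quoted above).
seconds_per_audio_second = 62
utterance_length_s = 5.0  # hypothetical sentence length

total_s = utterance_length_s * seconds_per_audio_second
print(f"expected synthesis time: {total_s:.0f} s (~{total_s / 60:.1f} min)")
```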
In your new repo you added WaveNet preprocessing, why? Is GTA better than the Tacotron-2 synthesized output? I tried your new repo, and during preprocessing...
Raw WaveNet gives overall better audio quality than mulaw-quantize, but it is slower to converge. wavenet_preprocess was added as a way to use WaveNet in a standalone fashion, in case one is not interested in the Tacotron part. Please make sure you have the same numpy version as in the requirements file.
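For context, "mulaw-quantize" means the waveform is companded and quantized to a small number of discrete levels (conventionally 256) before WaveNet models it, while "raw" keeps the continuous signal. A minimal sketch of the standard mu-law encode/decode (the generic formula, not necessarily this repo's exact implementation):

```python
import numpy as np

def mulaw_encode(x, mu=255):
    """Compress a waveform in [-1, 1] with mu-law, then quantize to mu + 1 levels."""
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.floor((compressed + 1) / 2 * mu + 0.5).astype(np.int32)  # [-1,1] -> {0..mu}

def mulaw_decode(y, mu=255):
    """Undo the quantization and the mu-law compression."""
    x = 2 * y.astype(np.float64) / mu - 1
    return np.sign(x) * np.expm1(np.abs(x) * np.log1p(mu)) / mu

wav = np.linspace(-1.0, 1.0, 5)
print(mulaw_decode(mulaw_encode(wav)))  # close to wav, up to quantization error
```

The coarse quantized targets make the prediction problem easier (faster convergence) but cap the achievable fidelity, which matches the trade-off described above.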
My numpy version is the same, yet the warning keeps appearing. This is also what happens when I run it:

```
Traceback (most recent call last):
  ...
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ...
Caused by op 'datafeeder/input_queue_enqueue', defined at:
  ...
CancelledError (see above for traceback): Enqueue operation was cancelled

Exception in thread background:
  ...
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ...
Caused by op 'datafeeder/eval_queue_enqueue', defined at:
  ...
CancelledError (see above for traceback): Enqueue operation was cancelled
```
@ilhamprayudha, according to this issue, it appears that your numpy warning can safely be ignored. As for the enqueue cancelling, it happens when a bug occurs during training and the feeder drops all its queues. The log you provided doesn't contain the main cause of the crash, though, so I can't really say what's going on with your run (the useful part of a crash log is usually its first lines). Try running the training with all "stdout" redirected to a log file so that we get the entire crash log, then send it to me. Thanks :)
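One simple way to capture the complete log, including the first traceback that actually triggered the cascade (a generic sketch; the train.py arguments below are placeholders for however you normally launch training):

```python
import subprocess
import sys

# Launch training as usual, but tee stdout+stderr into a file so the very first
# traceback (the real cause) is preserved alongside the later queue cancellations.
cmd = [sys.executable, "train.py", "--model", "Tacotron-2"]  # placeholder arguments

with open("train_full.log", "w") as log:
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    for line in proc.stdout:
        sys.stdout.write(line)  # still show progress in the console
        log.write(line)         # ...and keep a complete copy on disk
    proc.wait()
```

A plain shell redirection (appending `> train_full.log 2>&1` to the usual training command) achieves the same thing.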
I got exactly the same error as @ilhamprayudha described above, using model...

My log in console is...

I'm training a fresh new model, without using a pre-trained one.
Found the solution here: the relationship between hop size and the product of the upsample scales. Thank you!
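For anyone hitting the same problem: the WaveNet conditioning network upsamples each mel frame by the product of its upsample scales, so that product must equal hop_size exactly. A minimal consistency check (the numbers are illustrative; use the values from your own hparams):

```python
from functools import reduce
from operator import mul

# Illustrative values only -- substitute the ones from your hparams.py.
hop_size = 275                # audio samples between consecutive mel frames
upsample_scales = [5, 5, 11]  # per-layer upsampling factors in the WaveNet vocoder

product = reduce(mul, upsample_scales, 1)
assert product == hop_size, (
    f"prod(upsample_scales) = {product} must equal hop_size = {hop_size}, "
    "otherwise the conditioning features and the audio samples fall out of alignment"
)
print("hop_size and upsample_scales are consistent")
```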
1. Is there any limit on the data during preprocessing? I use a dataset with 15438 utterances, but preprocessing only produces 12988.
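If the max_mel_frames limit discussed earlier applies here, that would explain the gap: utterances whose mel spectrograms are too long get skipped, so the output count is smaller than the raw dataset size. A toy sketch of that kind of filter (the lengths are made up, and the repo's actual condition may include additional checks):

```python
# Toy illustration: drop utterances whose mel length exceeds max_mel_frames.
max_mel_frames = 900
mel_lengths = [420, 950, 600, 1200, 310]  # hypothetical per-utterance frame counts

kept = [n for n in mel_lengths if n <= max_mel_frames]
print(f"kept {len(kept)} of {len(mel_lengths)} utterances")
```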