GTX 1070 and 8 GB memory cause memory dump #239
Try lowering tacotron_batch_size or increasing outputs_per_step in hparams.py.
I am already using a batch size of 8. I am just wondering if there is anything wrong with the logic, as the memory consumption is really high.
Hi, 8 GB is enough to hold a batch size of 32 with outputs_per_step=2. If you are using a custom dataset, it's possible you have very long utterances; I usually recommend wavs between 3 and 14 seconds long. To avoid long utterances you can decrease max_mel_frames until the OOM is gone. Please also ensure your GPU is as free as possible when launching training.
@Rayhane-mamah Thanks for your reply. I am just training on the LJ Speech dataset, but it always runs out of memory. I will try decreasing max_mel_frames. My GPU is free.
@jinfagang Hello! I use a custom dataset and had to set batch_size to 32, because with batch_size of 8 the model doesn't seem to converge (probably it converges too slowly). @Rayhane-mamah has commented about batch_size and max_mel_frames, and there is a long Q&A about these settings.
PS: I fixed the OOMs by lowering the audio quality to 16 kHz, setting outputs_per_step to 3, and lowering max_mel_frames to 500. After these changes GPU RAM consumption is 5.5 GB, which I'm okay with (I have a GTX 1080 with 8 GB). UPD: I re-read what I had done, looked for the long sentences skipped by max_mel_frames, split them, and returned max_mel_frames to its default of 1000. So the only change besides lowering the audio to 16 kHz is outputs_per_step=3. Memory consumption is 7.5 GB now. Thanks for your question :)
Does this really consume that much memory? Do I need to upgrade my hardware to run this?