
GTX1070 and 8G memory cause memory dump #239

Open
lucasjinreal opened this issue Oct 11, 2018 · 5 comments


@lucasjinreal

Does this really consume this much memory? Do I need to upgrade my machine to run this?

@alexdemartos

Try lowering tacotron_batch_size or increasing outputs_per_step in hparams.py.
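For reference, a minimal sketch of the kind of hparams.py change being suggested here; the parameter names match this repo's hparams.py, but the values are illustrative only:

```python
# Illustrative hparams.py fragment: trading speed/quality for GPU memory.
# Values are examples, not recommendations.
tacotron_batch_size = 16   # fewer utterances per step -> less GPU memory
outputs_per_step = 2       # each decoder step emits 2 mel frames, so the
                           # decoder is unrolled for roughly half as many
                           # steps, cutting activation memory
```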

@lucasjinreal
Author

I am already using a batch size of 8. I am just wondering if there is anything wrong with the logic, since the memory consumption is really high.

@Rayhane-mamah
Owner

Rayhane-mamah commented Oct 11, 2018 via email

@lucasjinreal
Author

@Rayhane-mamah Thanks for your reply. I am just training on the LJ Speech dataset, but it always runs out of memory. I will try decreasing max_mel_frames. My GPU is otherwise free.

@gloriouskilka

gloriouskilka commented Oct 15, 2018

@jinfagang Hello! I use a custom dataset and had to set batch_size to 32, because with batch_size set to 8 the model doesn't seem to converge (probably it converges too slowly).

@Rayhane-mamah commented about batch_size and max_mel_frames
#104 (comment)
#206 (comment)

And a huge Q&A about these:
#226 (comment)

If your batch_size is small, increase it to 32. Then increase outputs_per_step to 3 if you face OOM. If OOM is still present, decrease max_mel_frames until the problem is fixed. Note that increasing outputs_per_step and decreasing max_mel_frames will impact model quality negatively. That's why we recommend using proper hardware (at least an 8 GB GPU).
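The memory effect of these two knobs follows from the decoder unrolling length; a minimal sketch of the arithmetic (the ceil-division formula is an assumption about how frames map to decoder steps):

```python
import math

def decoder_steps(mel_frames: int, outputs_per_step: int) -> int:
    """Number of decoder iterations needed to emit `mel_frames` mel frames
    when each step produces `outputs_per_step` frames (the reduction factor)."""
    return math.ceil(mel_frames / outputs_per_step)

# Going from outputs_per_step=1 to 3 cuts the unrolled decoder length
# (and its activation memory) to roughly a third:
print(decoder_steps(1000, 1))  # 1000
print(decoder_steps(1000, 3))  # 334
```

Lowering max_mel_frames shrinks the same quantity from the other side, which is why the two options trade off against each other.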

PS: I fixed the OOMs by lowering the audio quality to 16 kHz, setting outputs_per_step to 3, and lowering max_mel_frames to 500. After these changes GPU RAM consumption is 5.5 GB, which I'm okay with. (I have a GTX 1080 with 8 GB.)

UPD: I went back over what I had done, looked for the long sentences that were being skipped by max_mel_frames, split them up, and returned max_mel_frames to its default of 1000. So the only change besides lowering the audio to 16 kHz is outputs_per_step=3. Memory consumption is 7.5 GB now.
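The "look for long sentences skipped by max_mel_frames" step above can be sketched as a small filter over the preprocessing metadata. This assumes the pipe-separated train.txt layout with the mel-frame count in the fifth field; your version of the preprocessing script may order the fields differently:

```python
MAX_MEL_FRAMES = 1000  # value from hparams.py

def too_long(metadata_lines, max_frames=MAX_MEL_FRAMES):
    """Return (mel_frames, text) pairs for utterances exceeding max_frames.
    Assumed line layout: audio|mel|linear|time_steps|mel_frames|text."""
    hits = []
    for line in metadata_lines:
        parts = line.strip().split('|')
        frames = int(parts[4])
        if frames > max_frames:
            hits.append((frames, parts[-1]))
    return hits

# Example with synthetic metadata lines:
sample = [
    "audio-1.npy|mel-1.npy|linear-1.npy|48000|400|short sentence",
    "audio-2.npy|mel-2.npy|linear-2.npy|480000|1350|a very long sentence to split",
]
print(too_long(sample))  # [(1350, 'a very long sentence to split')]
```

Any utterance this flags would otherwise be silently dropped from training, so splitting it keeps the data while staying under the frame cap.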

Thanks for your question :)
