How much memory is needed for training? #1
Comments
I train on a GPU with 6 GB of memory.
The batch size is 5, the default value.
I don't know how to resolve this issue, but perhaps this information helps you.
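One common way to fit training into a smaller GPU is to lower the batch size and let TensorFlow allocate GPU memory on demand instead of reserving it all up front. The snippet below is only a minimal sketch, assuming TensorFlow 2.x; how the batch size is actually passed to this repository's training script is not stated in the thread, so BATCH_SIZE is a hypothetical stand-in.

```python
import tensorflow as tf

# Hypothetical stand-in for the training script's batch-size setting;
# values smaller than the default 5 reduce peak GPU memory use.
BATCH_SIZE = 2

# Ask TensorFlow to grow its GPU allocation on demand rather than
# grabbing all available memory at startup.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```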
OK, thank you. I just found that all 48 GB of memory was being used.
Hi hwd8868! I also ran into the same error as you did. I was using a CPU with around 20 GB of memory, but my maximum usage only reached about 45%. I wonder how you solved your issue, and whether you have any suggestions? Thank you!
I used tensorflow-CPU to train on the dataset with 48 GB of memory, but I got a memory fault after training for several hours:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
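A std::bad_alloc from the CPU build usually means host memory was exhausted, for example by materialising the entire dataset in RAM. One possible mitigation is to stream samples through a tf.data pipeline instead. This is only a sketch under assumptions: TensorFlow 2.x, and the file layout (samples/*.npy) and per-sample format are hypothetical, not taken from this repository.

```python
import numpy as np
import tensorflow as tf

def sample_generator():
    # Yield one (features, label) pair at a time so the full dataset is
    # never held in host memory at once.
    for path in tf.io.gfile.glob("samples/*.npy"):  # hypothetical layout
        arr = np.load(path).astype(np.float32)
        yield arr[:-1], arr[-1]

dataset = (
    tf.data.Dataset.from_generator(
        sample_generator,
        output_signature=(
            tf.TensorSpec(shape=(None,), dtype=tf.float32),
            tf.TensorSpec(shape=(), dtype=tf.float32),
        ),
    )
    .batch(2)                    # small batches keep peak memory low
    .prefetch(tf.data.AUTOTUNE)  # overlap data loading with training
)
```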