4 GPUs work fine while 8 GPUs cause memory leakage and run slowly? #58
I tested sync inplace ABN on an 8 x GTX TITAN machine and an 8 x TITAN Xp machine. In both cases, using 4 GPUs works well, while memory leakage appears when 8 GPUs are used.

GPU: GeForce GTX TITAN and TITAN Xp
CUDA: 8.0
PyTorch: 0.4.1
GCC: 5.2.0

Comments

Did you mean your GCC is 5.2.0? Also, did you compile PyTorch from source or use conda? I have the same configs but get errors when trying to run.

@Spandan-Madan, yes, GCC is 5.2.0. I used conda to install PyTorch and tested my own code with sync inplace ABN incorporated.

I get the same problem.

same problem.

same problem

@hszhao @Spandan-Madan @ackness @wangjingbo1219 @KeyKy we have released today a new version of our code that requires PyTorch v1.0 and supports multi-process synchronization. This solves problems with training on more than 4 GPUs.

Closing because the new version should have solved the issue.

Thank you, the newest version of
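The maintainers' fix moves batch-norm statistic synchronization from a thread-based scheme to multi-process synchronization. Conceptually, synchronized BN only needs each worker to contribute its local count, sum, and sum of squares; reducing these across processes reproduces the statistics of the full batch exactly. A minimal pure-Python sketch of that aggregation, with lists standing in for per-GPU shards (all names here are illustrative, not part of the inplace_abn API):

```python
def local_stats(shard):
    """Per-worker partial statistics: (count, sum, sum of squares)."""
    n = len(shard)
    s = sum(shard)
    ss = sum(x * x for x in shard)
    return n, s, ss

def sync_mean_var(shards):
    """Reduce partial statistics across workers, as multi-process
    sync BN does with an all-reduce, and recover global mean/variance."""
    parts = [local_stats(sh) for sh in shards]
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    var = ss / n - mean * mean  # biased variance, as BN uses in training
    return mean, var

# Two "GPUs", each holding half of the batch [1, 2, 3, 4]:
mean, var = sync_mean_var([[1.0, 2.0], [3.0, 4.0]])
print(mean, var)  # same as computing over the concatenated batch
```

The point is that the reduction is associative, so it does not matter whether it happens across threads on one process or across separate processes per GPU; the per-process design simply avoids the contention that caused the slowdown and leak beyond 4 GPUs.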