Increase in GPU memory usage with Pytorch-Lightning #1376
VitorGuizilini added the bug (Something isn't working) and help wanted (Open to be worked on) labels on Apr 4, 2020.
Hi! Thanks for your contribution, great first issue!
Borda added the feature (Is an improvement or enhancement) and information needed labels and removed the bug (Something isn't working) label on Apr 5, 2020.
Hi @vguizilini, could you be more specific about how much more memory is required?
@jeremyjordan, can we get that memory profiler?
@neggert or @williamFalcon, any ideas why GPU memory isn't consistent across the nodes?
Following up on this issue, is there anything else I should provide to facilitate debugging?
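For what it is worth, here is a minimal sketch of how peak-memory numbers could be collected for the thread using plain PyTorch CUDA utilities (no Lightning-specific profiler is assumed; `model` and `batch` are placeholders for the depth network and a training batch):

```python
import torch

def report_peak_memory(model, batch, device="cuda:0"):
    """Run a single forward/backward pass and print the peak GPU memory it used."""
    model = model.to(device)
    batch = batch.to(device)

    torch.cuda.reset_max_memory_allocated(device)

    # Placeholder training step: replace with the actual loss computation of the depth model.
    loss = model(batch).mean()
    loss.backward()

    peak_mib = torch.cuda.max_memory_allocated(device) / 1024 ** 2
    print(f"peak GPU memory on {device}: {peak_mib:.1f} MiB")
```

Running this once under the original training script and once under the Lightning-wrapped module would give a concrete before/after figure, and calling it per GPU would show whether memory really differs across devices or nodes.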
Original issue description (VitorGuizilini):
Over the last week I have been porting my monocular depth estimation code to Pytorch-Lightning, and everything is working perfectly. However, my models seem to require more GPU memory than before, to the point where I need to significantly decrease the batch size at training time. These are the Trainer parameters I am using, and the relevant versions:
Because of that (probably), I am having issues replicating my results; could you please advise on possible solutions? I will open-source the code as soon as I manage to replicate my current results.
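The Trainer parameter and version list referenced above is not present in this copy of the issue, so it is left as a gap. Purely as a hypothetical illustration (none of these values are the author's, and argument names follow the 0.7.x-era API as best recalled), these are the kinds of Trainer settings that most directly affect GPU memory:

```python
import pytorch_lightning as pl

# Hypothetical configuration for illustration only; not the author's actual settings.
trainer = pl.Trainer(
    gpus=8,                      # GPUs per node
    num_nodes=2,
    distributed_backend="ddp",   # "dp" gathers outputs on GPU 0 and can look like a memory increase
    precision=16,                # mixed precision roughly halves activation memory (needs apex in 0.7.x)
    amp_level="O1",
    accumulate_grad_batches=2,   # smaller per-step batches, same effective batch size
    max_epochs=50,
)
```

If memory is genuinely higher than under the pre-Lightning script, comparing settings like these, together with whatever tensors are returned from training_step and validation_step (returned outputs are held for the epoch-end hooks), is usually a reasonable place to look.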