Problem with CUDA out of memory #3
Hi @Chelsea-abab, reducing GPU memory consumption is a fairly general problem. Here are some ideas to achieve that:
Or, for fine-tuning the diffusion model, you could also look for other repositories with more memory-efficient implementations. I hope these ideas help you address the problem.
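As a general illustration only (not this repository's code), here is a minimal PyTorch sketch combining two of the most common memory-saving techniques, mixed precision and gradient accumulation. It assumes a hypothetical `model` that returns a scalar loss; adapt it to the actual training code.

```python
# Generic memory-saving training loop: mixed precision + gradient accumulation.
# Assumes `model(images)` returns a scalar loss (hypothetical interface).
import torch
from torch.cuda.amp import autocast, GradScaler

def train_one_epoch(model, optimizer, dataloader, accum_steps=4, device="cuda"):
    scaler = GradScaler()
    optimizer.zero_grad(set_to_none=True)
    for step, images in enumerate(dataloader):
        images = images.to(device, non_blocking=True)
        with autocast():                        # fp16 activations roughly halve activation memory
            loss = model(images) / accum_steps  # scale so accumulated gradients match a full batch
        scaler.scale(loss).backward()
        if (step + 1) % accum_steps == 0:       # effective batch = accum_steps * per-step batch
            scaler.step(optimizer)
            scaler.update()
            optimizer.zero_grad(set_to_none=True)
```

Gradient checkpointing (`torch.utils.checkpoint`) and 8-bit optimizers can reduce memory further, at the cost of extra compute.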
OK, thanks for your reply! I thought there were some hyperparameters that could be adjusted to reduce GPU memory consumption. I will try these general methods to address the problem. Thanks again!
@Chelsea-abab did you solve this problem?
Hi,
I want to reproduce your results using the provided code, but I am stuck at the fine-tuning step. No matter how much I reduce the batch size and the input image size, it still reports CUDA out of memory. I am running the code following your instructions on an NVIDIA 3090 GPU with 24 GB of memory, and it seems that all of the memory is already allocated before the program even loads the images. So even after reducing the batch size to 1 and the input size to 64, the out-of-memory error is still there. Is the memory being allocated by the models? Do you have any idea how to solve this? I would expect 24 GB to be enough to fine-tune a diffusion model.
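For anyone diagnosing the same symptom, a minimal sketch for checking how much GPU memory the models themselves claim before any images are loaded; `load_models` here is a hypothetical stand-in for the repository's model-construction code.

```python
# Check how much GPU memory is claimed by model weights alone,
# before the dataloader ever produces a batch.
import torch

device = torch.device("cuda")

def report(tag):
    # memory_allocated: tensors currently held; memory_reserved: cached by the allocator
    alloc = torch.cuda.memory_allocated(device) / 1024**3
    reserved = torch.cuda.memory_reserved(device) / 1024**3
    print(f"{tag}: allocated={alloc:.2f} GiB, reserved={reserved:.2f} GiB")

report("before model load")
models = load_models()                   # hypothetical: build/restore the diffusion model(s)
models = [m.to(device) for m in models]
report("after model load")               # if this is already near 24 GiB, the models, not the batch, are the issue
print(torch.cuda.memory_summary(device)) # detailed allocator breakdown
```

If the "after model load" number is already close to the card's capacity, reducing the batch size or input resolution will not help; the model itself has to be made smaller or moved to lower precision.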