Issue while running SD1.5 on multiple less beefy GPUs #7682
Comments
Did you try setting something like `max_memory = {0: "1GB", 1: "1GB"}`?
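For reference, a minimal sketch of the `max_memory` suggestion above. The helper name `make_max_memory` and the model ID in the commented-out call are illustrative assumptions, not from the thread; `device_map` and `max_memory` are the arguments diffusers/accelerate accept for balanced sharding across GPUs.

```python
# Hypothetical helper (illustration only): build the mapping that
# accelerate expects, {gpu_index: "memory budget string"}.
def make_max_memory(num_gpus: int, per_gpu: str = "1GB") -> dict:
    return {i: per_gpu for i in range(num_gpus)}

# Sketch of how it would be passed to a pipeline (needs GPUs and a
# model download, so it is left commented out):
# from diffusers import DiffusionPipeline
# pipe = DiffusionPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5",   # assumed SD1.5 checkpoint
#     device_map="balanced",
#     max_memory=make_max_memory(2, "1GB"),
# )

print(make_max_memory(2))  # -> {0: '1GB', 1: '1GB'}
```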
Yes, I tried this as well! Same error!
Cc: @SunMarc
Could you also share the output of
Hi @square-1111, the issue is due to a breaking change in the latest `accelerate`. If you use `accelerate==0.27.0`, it should work fine. We will do a patch release in diffusers to fix this issue. See the related issue.
(nvidia-smi output from Wed Apr 17 10:45:32 2024; table not captured)
I tried this. Got this issue:
What did you do?
Installed `accelerate==0.27.0`
Can you try uninstalling
Still the same issue!
I will defer to @SunMarc to comment further then.
Hi @square-1111, can you try installing `accelerate==0.28.0` instead? I didn't get the error because I checked out directly to the commit tagged v0.27.
Hi @SunMarc, it works perfectly with
Hi @square-1111, we can't guarantee that you won't get an OOM error during inference. You can use
I tried this, but inference with it is slower than CPU inference. xD
That is expected, as it involves data movement across devices.
Describe the bug
I am trying to run distributed inference for SD1.5 and SDXL on 2x GTX 1080 Ti, but am facing some issues.
Reproduction
Command to run: `CUDA_VISIBLE_DEVICES="0,1" python sd15_inference.py`
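For context, `CUDA_VISIBLE_DEVICES="0,1"` exposes physical GPUs 0 and 1 to the process, which then sees them as logical devices `cuda:0` and `cuda:1`. A small illustrative helper (hypothetical, not part of the repro script) that parses such a value:

```python
# Parse a CUDA_VISIBLE_DEVICES-style value into the list of physical
# GPU indices the process may use; frameworks like torch renumber
# these as cuda:0..cuda:N-1 in the order given.
def visible_devices(env_value: str) -> list[int]:
    return [int(part) for part in env_value.split(",") if part.strip()]

print(visible_devices("0,1"))  # -> [0, 1]
```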
Logs
System Info
Output of `diffusers-cli env`:

Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

- `diffusers` version: 0.28.0.dev0

Who can help?

@sayakpaul