[Bug]: Got black image when trying to use the SD model 2.1 #5503
Comments
If you use |
Same issue here, with Windows 10. :-(
Stability-AI/stablediffusion@c12d960 seems relevant. we may need to export some environment variable to enable |
Use the v2-inference-v.yaml mentioned above. Use this file for the 768 model only, and https://github.com/Stability-AI/stablediffusion/blob/main/configs/stable-diffusion/v2-inference.yaml (without -v) for the 512 model. Copy it beside your checkpoint file and give it the same name, but with a .yaml extension.
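A minimal shell sketch of the naming convention described above (the checkpoint filename here is a hypothetical example; substitute your own):

```shell
# Hypothetical checkpoint name -- replace with your actual file.
ckpt="v2-1_768-ema-pruned.ckpt"
# The config must share the checkpoint's base name, with a .yaml extension.
cfg="${ckpt%.ckpt}.yaml"
echo "$cfg"
# e.g. download v2-inference-v.yaml and save it as "$cfg"
# in the same directory as the checkpoint.
```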
Theoretically there shouldn't be an issue with using SD 2.1 if SD 2.0 already worked without |
Solution here: #5506
@miguelgargallo Adding |
I did some more testing and I found another way to fix it! If you enable xformers with |
You could try setting the following environment variable.
and additionally if you want to use half-precision
So for example for the webui-user.bat
This should check out the stablediffusion repository at the specified commit on the next launch. "8bde0cf64f3735bb33d93bdb8e28120be45c479b" is specifically the commit that adds the ATTN_PRECISION environment variable (see Stability-AI/stablediffusion@8bde0cf). Works for me, but my local fork has diverged a bit from the current master, so someone should retest this. :)
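Pulling the pieces of this comment together, a Linux-side sketch might look like the following. The variable names come from this thread; which ATTN_PRECISION value actually fixes the issue is disputed below (one commenter still got black images with fp16), so fp32 here is an assumption based on the upcast logic in the referenced commits:

```shell
# Pin the stablediffusion repo to the commit that adds ATTN_PRECISION
export STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"
# Assumed value: force fp32 attention to avoid fp16 NaNs (disputed in thread)
export ATTN_PRECISION=fp32
# ./webui.sh   # then launch as usual
```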
I can confirm that black images only happen on 768 models for 2.1 and 2.0. 512 models don't produce black images, except maybe on GTX 10xx cards like before. I really didn't have to use --no-half before, and I probably can't anyway since I only have 4GB. Well, I can if I use --lowvram, but I really didn't have to on pre-2.0 models.
Where do we put this? STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"
I'm on a 1060 6GB, and the v2.1 512 model was returning images while the v2.1 768 model needed additional work to not end up blank. Turning xformers back on did allow the 768 model to properly generate an image for me. Considering almost all my VRAM is used while generating, --no-half probably isn't a viable solution without other flags which would slow the process for me. Summary: xformers makes the 768 model function on my hardware.
In whatever script you use to launch the webui. So the webui-user.bat could look something like this (remember to set your COMMANDLINE_ARGS)
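Since the example itself was not quoted here, a sketch of what such a webui-user.bat might look like (the PYTHON/GIT/VENV_DIR lines follow the stock template; the flag and hash values are taken from this thread, not verified):

```bat
@echo off
rem Sketch only -- adjust paths and flags to your own setup.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers
set STABLE_DIFFUSION_COMMIT_HASH=8bde0cf64f3735bb33d93bdb8e28120be45c479b
call webui.bat
```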
Tried xformers with the 768 model before switching the commit hash. It worked fine for lower resolutions, but for unusually large pictures like 1920x1080 I kept consistently getting a black screen. I'm on an RTX 3090.
Yes, I had the same issue and xformers fixed it.
Results are also oversaturated or deepfried somehow; maybe it's because of v-prediction?
Any code change to any file that fixes the project is enough for a PR, and I will also explain and thoroughly document all the steps.
If you have an AMD card you can't use xformers, and full precision just runs out of memory when doing 768x768, even though I have 16 GB of VRAM.
I can't find any usage of ATTN_PRECISION in the code at the commit hash mentioned above. Their latest commit does have some code related to it, though (c12d960). However, even after using the latest version and setting it to fp16, I still get black images.
You mean this commit with the usage of ATTN_PRECISION? Stability-AI/stablediffusion@e1797ae
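For intuition only (this is not the repository's actual code), a minimal NumPy sketch of why fp16 attention can produce NaNs, which downstream render as black images, and why upcasting to fp32 before the softmax, as an ATTN_PRECISION-style switch would do, avoids it:

```python
import numpy as np

def attn_weights(q, k, upcast=False):
    """Toy attention weights over q.kT (no value matmul, no scaling)."""
    if upcast:
        q, k = q.astype(np.float32), k.astype(np.float32)
    # In fp16, large per-element products overflow to inf; the
    # "inf - inf" in the stabilisation step below then yields NaN.
    logits = (q[:, None, :] * k[None, :, :]).sum(axis=-1)
    logits = logits - logits.max(axis=-1, keepdims=True)  # softmax stabilisation
    w = np.exp(logits)
    return w / w.sum(axis=-1, keepdims=True)

q = np.full((2, 64), 300, dtype=np.float16)
print(np.isnan(attn_weights(q, q)).any())                # True: fp16 NaNs
print(np.isnan(attn_weights(q, q, upcast=True)).any())   # False: fp32 is clean
```

The fp32 branch survives because float32 can represent the large dot products that overflow float16's ~65504 maximum.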
@RainfoxAri listed the example here in the wiki. Right or wrong? Does it need that commit hash to work properly? It is confusing for those wanting to run in fp16 mode without --xformers.
--xformers does not work for me at all; it crashes with
However, even leaving "--xformers" out doesn't work; I have to pip uninstall it. So there needs to be some code cleanup on this front. --no-half, on the other hand, works fine for me.
I have spent the last 12 hours trying to recompile xformers because mine got zapped. On 1.5 I was done in 25 minutes; now it's all kinds of hell, so I just said to hell with it, only to find that 2.1 gives my 1060 6GB a solid black 768x768 image without xformers. Since --xformers has NOT worked on my Pascal card since the day it was introduced, I decided to ditch it, only to run into this issue.
Has this been fixed? I'm still getting black images with the 768px 2.1 model. I can't use --no-half, so I'm looking for another way.
Either you use xformers, or you use --no-half, or you fall back to 2.0. I believe they said xformers (or --no-half) is becoming mandatory from 2.1 onwards. They may change that, but even my 1060 can do fp32, albeit with the 6 GB VRAM limitation.
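The two workarounds named in this thread, as launch-flag sketches (flag names are taken from the comments above; pick one line, combinations untested here):

```bat
rem Option 1: xformers attention (NVIDIA-only, per the comments below)
set COMMANDLINE_ARGS=--xformers

rem Option 2: full precision (works without xformers, needs more VRAM)
set COMMANDLINE_ARGS=--no-half
```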
Seems a bit of an odd decision given that xformers is NVIDIA-only.
Hence the --no-half flag, which I believe AMD can do. Personally, my hope is that RDNA4 swings this around so we no longer need NVIDIA and its BS. CUDA is the only reason I stay with NVIDIA.
Closing as stale. |
Is there an existing issue for this?
What happened?
Got a black image when trying to use the latest SD 2.1 model, even though I copied the v2-inference-v.yaml file and renamed it to [model-name].yaml.
Steps to reproduce the problem
As described above.
What should have happened?
Should generate an image as prompted.
Commit where the problem happens
44c46f0
What platforms do you use to access the UI?
Linux
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
Additional information, context and logs
No response