
[Bug]: Got black image when trying to use the SD model 2.1 #5503

Closed
1 task done
ruradium opened this issue Dec 7, 2022 · 28 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@ruradium

ruradium commented Dec 7, 2022

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Got a black image when trying to use the latest SD 2.1 model, even though I copied the v2-inference-v.yaml file and renamed it to [model-name].yaml.

Steps to reproduce the problem

As described above.

What should have happened?

It should generate an image as prompted.

Commit where the problem happens

44c46f0

What platforms do you use to access UI ?

Linux

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--api --listen --no-half-vae

Additional information, context and logs

No response

ruradium added the bug-report label on Dec 7, 2022
@ProGamerGov
Contributor

If you use --no-half it will work, but then it also requires a lot more VRAM to generate larger images.

@bsalberto77

Same issue here, with Windows 10. :-(

@nousr

nousr commented Dec 7, 2022

--no-half --no-half-vae --api --listen works for me...but...

Stability-AI/stablediffusion@c12d960 seems relevant. we may need to export some environment variable to enable fp16 for 2.1

@djdookie

djdookie commented Dec 7, 2022

Use the v2-inference-v.yaml mentioned above for the 768 model only, and https://github.com/Stability-AI/stablediffusion/blob/main/configs/stable-diffusion/v2-inference.yaml (without -v) for the 512 model. Copy it beside your checkpoint file and give it the same name, but with a .yaml extension.
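As a concrete sketch of the copy-and-rename step (the directory layout and checkpoint filename here are hypothetical examples, not taken from this thread; adjust to your install):

```shell
# Sketch of the copy-and-rename step; WORK stands in for the webui install dir,
# and the checkpoint/config filenames are hypothetical examples.
WORK="$(mktemp -d)"
MODELS_DIR="$WORK/models/Stable-diffusion"
mkdir -p "$MODELS_DIR"
CKPT="v2-1_768-ema-pruned.ckpt"                 # hypothetical checkpoint filename
touch "$MODELS_DIR/$CKPT"                       # pretend the checkpoint is present
echo "model: {}" > "$WORK/v2-inference-v.yaml"  # placeholder for the downloaded config
# 768 model -> v2-inference-v.yaml; 512 model -> v2-inference.yaml (without -v)
cp "$WORK/v2-inference-v.yaml" "$MODELS_DIR/${CKPT%.ckpt}.yaml"
```

The only thing that matters is that the .yaml ends up next to the .ckpt with the same basename.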

@ProGamerGov
Contributor

Theoretically there shouldn't be an issue with using SD 2.1 if SD 2.0 already worked without --no-half, so I'm not sure why it's broken.

@miguelgargallo

miguelgargallo commented Dec 7, 2022

Solution here #5506

@ProGamerGov
Contributor

ProGamerGov commented Dec 7, 2022

@miguelgargallo Adding --no-half isn't really a PR-worthy fix, as it should work without that argument.

@ProGamerGov
Contributor

I did some more testing and I found another way to fix it!

If you enable xformers with --xformers, then you don't have to use --no-half!
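If you launch through webui-user.sh on Linux, a minimal sketch of that change would be (the --xformers flag is from this thread; how you combine it with your other options is up to you):

```shell
# webui-user.sh sketch: enable xformers so fp16 works with the SD 2.1 768 model
export COMMANDLINE_ARGS="--xformers"
```

On Windows, the equivalent would be `set COMMANDLINE_ARGS=--xformers` in webui-user.bat.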

@RainfoxAri

You could try setting the following environment variable.

STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"

and additionally if you want to use half-precision

ATTN_PRECISION=fp16

So for example for the webui-user.bat

set STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"
set ATTN_PRECISION=fp16

This should check out the stablediffusion repository at the specified commit on the next launch. And "8bde0cf64f3735bb33d93bdb8e28120be45c479b" is specifically the commit that adds the ATTN_PRECISION environment variable (see Stability-AI/stablediffusion@8bde0cf).

Works for me, but my local fork is a bit diverged from the current master. So someone should retest this. :)

@ghost

ghost commented Dec 7, 2022

I can confirm black images only happen with the 768 models for 2.1 and 2.0. The 512 models don't produce black images, except maybe on GTX 10xx cards like before. I didn't have to use --no-half before, and I probably can't now since I only have 4GB. Well, I can if I use --lowvram, but I really didn't need it on pre-2.0 models.

@ghost

ghost commented Dec 7, 2022


Where do we put this? STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"

@MegaScience

I'm on a 1060 6GB, and the v2.1 512 model was returning images while the v2.1 768 model needed additional work to not end up blank. Turning xformers back on did allow the 768 model to properly generate an image for me. Considering almost all my VRAM is used while generating, --no-half probably isn't a viable solution without other flags which would slow the process for me.

Summary: xformers makes the 768 model function on my hardware.

@RainfoxAri

RainfoxAri commented Dec 7, 2022

Where do we put this? STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"

In whatever script you use to launch the webui.
On Windows that's most likely webui-user.bat; on Linux, webui-user.sh.

So the webui-user.bat could look something like this (remember to set your COMMANDLINE_ARGS)

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=your command line options
set STABLE_DIFFUSION_COMMIT_HASH="c12d960d1ee4f9134c2516862ef991ec52d3f59e"
set ATTN_PRECISION=fp16

call webui.bat
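For Linux users, a hypothetical webui-user.sh equivalent of the above (an untested sketch; fill in COMMANDLINE_ARGS with your own options) might look like:

```shell
#!/bin/bash
# Sketch of a webui-user.sh mirroring the .bat example above; values are examples
export COMMANDLINE_ARGS=""   # your command line options
export STABLE_DIFFUSION_COMMIT_HASH="c12d960d1ee4f9134c2516862ef991ec52d3f59e"
export ATTN_PRECISION=fp16
```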

Summary: xformers makes the 768 model function on my hardware.

I tried xformers with the 768 model before switching the commit hash. It worked fine for lower resolutions, but for unusually large pictures like 1920x1080 I kept consistently getting a black image. I'm on an RTX 3090.

@curtwagner1984

I did some more testing and I found another way to fix it!

If you enable xformers with --xformers, then you don't have to use --no-half!

Yes, I had the same issue and xformers fixed it.

@ghost

ghost commented Dec 7, 2022

Results are also oversaturated or deepfried somehow, maybe it's because of v-prediction?

@miguelgargallo

miguelgargallo commented Dec 7, 2022

@miguelgargallo Adding --no-half isn't really a PR worthy fix as it should work without that argument.

Any code change that fixes the project is sufficient for a PR, and I also explained and thoroughly documented all the steps.

@CapsAdmin

If you have an AMD card you can't use xformers, and full precision just runs out of memory when doing 768x768, even though I have 16GB of VRAM.

@CapsAdmin

I can't find any usage of ATTN_PRECISION in code with the commit hash mentioned above. Their latest commit does have some code related to it though (c12d960)

However even after using the latest version and setting this to fp16 I still get black images.

@fractal-fumbler

I can't find any usage of ATTN_PRECISION in code with the commit hash mentioned above. Their latest commit does have some code related to it though (c12d960)

you meant this commit with usage of ATTN_PRECISION? Stability-AI/stablediffusion@e1797ae

@ClashSAN
Collaborator

@RainfoxAri listed the example here in the wiki. Is it right or wrong? Does it need that commit hash to work properly? It's confusing for those wanting to run in fp16 mode without --xformers.

ClashSAN reopened this on Dec 10, 2022
@OWKenobi
Contributor

--xformers does not work for me at all; it crashes with

  NotImplementedError: Could not run 'xformers::efficient_attention_forward_cutlass' with arguments from the 'CUDA' backend. 

However, even leaving "--xformers" out doesn't help; I have to pip uninstall it. So there needs to be some code cleanup on this front. --no-half, on the other hand, works fine for me.

@asimard1
Contributor

@OWKenobi I get this same error, it's very frustrating! See issue #5427 for more info (for you and others), but there doesn't seem to be a solution for now.

@DarkAlchy

I have spent the last 12 hours trying to recompile xformers because mine got zapped. On 1.5 I was done in 25 minutes; this time it was all kinds of hell, so I said to hell with it, only to find that 2.1 gives my 1060 6GB a solid black 768x768 image without xformers. Since --xformers has NOT worked on my Pascal card since the day it was introduced, I decided to ditch it, only to end up at this issue.

@Straafe

Straafe commented Jan 19, 2023

Has this been fixed? I'm still getting black images with the 768px 2.1 model. I can't use --no-half, so I'm looking for another way.

@DarkAlchy

DarkAlchy commented Jan 20, 2023

Has this been fixed? I'm still getting black images with the 768px 2.1 model. I can't use --no-half, so I'm looking for another way.

Either you use xformers, or you use --no-half, or you fall back to 2.0. Xformers (or --no-half) is becoming mandatory from 2.1 onwards, I believe they said. They may change that, but even my 1060 can do fp32, albeit with the 6GB VRAM limitation.

@CapsAdmin

Xformers (or --no-half) is becoming mandatory from 2.1 onwards, I believe they said. They may change that, but even my 1060 can do fp32, albeit with the 6GB VRAM limitation.

Seems a bit of an odd decision given that xformers is Nvidia-only.

@DarkAlchy

Xformers (or --no-half) is becoming mandatory from 2.1 onwards, I believe they said. They may change that, but even my 1060 can do fp32, albeit with the 6GB VRAM limitation.

Seems a bit of an odd decision given that xformers is Nvidia-only.

Hence the --no-half flag, which I believe AMD can use. Personally, my hope is that RDNA4 swings this around so we no longer need Nvidia and its BS. CUDA is the only reason I stay with Nvidia.

@catboxanon
Collaborator

Closing as stale.
