
[Bug]: Switching models several times leads to unique generations #12872

Closed
df2df opened this issue Aug 30, 2023 · 1 comment
Labels: bug-report (Report of a bug, yet to be confirmed)

df2df commented Aug 30, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

After switching models several times, the generated results differ from what the same settings normally produce. I have only seen this happen with XL models. As far as I can tell, the new result is influenced by the model I just switched away from; it is as if a phantom merge persists in memory even after switching back to the first model.

Edit: After a quick test with an add-difference merge between the two models, the unique result that gets generated is very similar to the output of that merge.

I've also seen cases where, after using an XL LoRA and then removing it from the prompt, the LoRA still affects subsequent generations.
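For reference, the checkpoint merger's "Add difference" mode computes, per weight tensor, A + (B − C) × multiplier, where C is usually the shared base checkpoint. A minimal sketch of that formula, assuming safetensors checkpoints and hypothetical file names, which could be used to reproduce the comparison described in the edit above:

```python
# Minimal sketch of the "Add difference" formula: merged = A + (B - C) * multiplier.
# File names and multiplier are hypothetical; C is typically the shared base checkpoint.
from safetensors.torch import load_file, save_file

def add_difference(path_a: str, path_b: str, path_c: str, out_path: str,
                   multiplier: float = 1.0) -> None:
    a, b, c = load_file(path_a), load_file(path_b), load_file(path_c)
    merged = {}
    for key, tensor_a in a.items():
        if key in b and key in c:
            merged[key] = tensor_a + (b[key] - c[key]) * multiplier
        else:
            # keys missing from B or C are copied from A unchanged
            merged[key] = tensor_a
    save_file(merged, out_path)

add_difference("modelA_xl.safetensors", "modelB_xl.safetensors",
               "sd_xl_base_1.0.safetensors", "merged_for_comparison.safetensors")
```

Comparing the output of such a merge against the "phantom" generation is a reasonable way to confirm that leftover weights from the second model are bleeding into the first.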

Steps to reproduce the problem

  1. Use an XL model and generate a result.
  2. Switch to another XL model and generate a result.
  3. Switch back to the first XL model and generate again; the result differs from what was produced in step 1, even with all settings identical (a scripted version of these steps is sketched below).
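The sketch below scripts those steps; it assumes the WebUI was launched with --api enabled and uses the /sdapi/v1/options and /sdapi/v1/txt2img endpoints. The checkpoint titles and prompt are placeholders, not the reporter's actual models.

```python
# Sketch of the repro steps against the WebUI API (requires launching with --api).
# Checkpoint titles and the prompt are placeholders; use the titles from the
# checkpoint dropdown on your install.
import hashlib
import requests

BASE = "http://127.0.0.1:7860"
PAYLOAD = {"prompt": "a photo of a cat", "seed": 1234, "steps": 20,
           "width": 1024, "height": 1024}

def set_checkpoint(title: str) -> None:
    requests.post(f"{BASE}/sdapi/v1/options",
                  json={"sd_model_checkpoint": title}).raise_for_status()

def generate_hash() -> str:
    r = requests.post(f"{BASE}/sdapi/v1/txt2img", json=PAYLOAD)
    r.raise_for_status()
    # hash the base64-encoded PNG; identical settings should yield identical bytes
    return hashlib.sha256(r.json()["images"][0].encode()).hexdigest()

set_checkpoint("modelA_xl.safetensors")  # step 1: first XL model
first = generate_hash()
set_checkpoint("modelB_xl.safetensors")  # step 2: another XL model
generate_hash()
set_checkpoint("modelA_xl.safetensors")  # step 3: back to the first XL model
again = generate_hash()
print("identical" if first == again else "DIFFERENT - bug reproduced")
```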

What should have happened?

Generated images should stay consistent after switching models multiple times

Version or Commit where the problem happens

v1.5.0

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Linux

What device are you running WebUI on?

AMD GPUs (RX 6000 above)

Cross attention optimization

Doggettx

What browsers do you use to access the UI ?

No response

Command Line Arguments

--ckpt-dir /home/user/SD/models --upcast-sampling --no-half-vae --opt-split-attention --autolaunch

List of extensions

sd-webui-lora-block-weight
sd-webui-additional-networks

Console logs

No errors

Additional information

Caching models in settings = 0
Keep models in VRAM = false
Cache VAE = false

df2df added the bug-report label on Aug 30, 2023
w-e-w (Collaborator) commented Aug 31, 2023

#12619
