
[Bug]: Previous model affects current model image generation even with same seed #14200

Open
miguel234457 opened this issue Dec 4, 2023 · 7 comments
Labels
bug Report of a confirmed bug

Comments

@miguel234457

miguel234457 commented Dec 4, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

This problem arises when switching models while generating the same image with the same LoRA. I tried generating the same image using the same seed, LoRA, and model. After switching between the old and new model, I get a significantly different result, as if effects of the previous model are bleeding into the image from the new model, which should not happen since models are separate from each other.

Steps to reproduce the problem

  1. Generate an image with the LoRA using the new model
  2. Switch to the bad model
  3. Generate an image with the LoRA using the bad model
  4. Switch back to the new model
  5. Generate an image with the LoRA using the new model with the same seed
  6. Get a significantly different generation despite the same model and seed (a scripted version of these steps via the webui API is sketched below)
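
For reference, a minimal sketch of how these steps could be scripted against a local webui instance started with --api. The /sdapi/v1/options and /sdapi/v1/txt2img routes are the standard webui API endpoints; the checkpoint titles, prompt, LoRA tag, and seed below are placeholders, not values taken from this report:

# Reproduce the steps above: generate on model A, switch to model B and back,
# regenerate with the same seed, then compare the decoded pixels of both results.
import base64
import hashlib
import io

import requests
from PIL import Image

BASE = "http://127.0.0.1:7860"  # local webui launched with --api (assumption)

def set_checkpoint(title):
    # Switch the active checkpoint; 'title' must match an entry from /sdapi/v1/sd-models.
    requests.post(f"{BASE}/sdapi/v1/options",
                  json={"sd_model_checkpoint": title}).raise_for_status()

def txt2img_pixel_hash(prompt, seed):
    # Run txt2img and hash the decoded pixels of the first image, so embedded
    # PNG metadata cannot influence the comparison.
    r = requests.post(f"{BASE}/sdapi/v1/txt2img",
                      json={"prompt": prompt, "seed": seed, "steps": 20})
    r.raise_for_status()
    png = base64.b64decode(r.json()["images"][0])
    with Image.open(io.BytesIO(png)) as im:
        return hashlib.sha256(im.convert("RGB").tobytes()).hexdigest()

PROMPT = "a photo of a cat <lora:example_lora:1>"  # placeholder prompt and LoRA tag
SEED = 1234

set_checkpoint("new_model.safetensors")   # placeholder checkpoint titles
first = txt2img_pixel_hash(PROMPT, SEED)

set_checkpoint("bad_model.safetensors")
txt2img_pixel_hash(PROMPT, SEED)          # generation on the "bad" model

set_checkpoint("new_model.safetensors")
second = txt2img_pixel_hash(PROMPT, SEED)

# With a fixed seed and identical settings, both hashes should match; on the
# affected setup they reportedly do not.
print("first: ", first)
print("second:", second)
print("identical" if first == second else "different")

Note that some attention optimizations (e.g. xformers) can introduce small nondeterminism of their own, so a large, consistent difference is the signal to look for rather than a single mismatched hash.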

What should have happened?

Models are separate from each other; one model should not affect another model's image generations when switching between them.

Sysinfo

{
"Platform": "Windows-10-10.0.19045-SP0",
"Python": "3.10.8",
"Version": "v1.7.0-RC-2-g883d6a2b",
"Commit": "883d6a2b34a2817304d23c2481a6f9fc56687a53",
"Script path": "D:\sd\stable-diffusion-webui",
"Data path": "D:\sd\stable-diffusion-webui",
"Extensions dir": "D:\sd\stable-diffusion-webui\extensions",
"Checksum": "d30082a1b5d297d57317fae3463502cc4fd72cb555d82e6b244c6cc999d70e10",
"Commandline": [
"launch.py",
"--xformers",
"--opt-split-attention",
"--no-half-vae",
"--upcast-sampling",
"--no-gradio-queue"
],
"Torch env info": {
"torch_version": "1.13.1+cu117",
"is_debug_build": "False",
"cuda_compiled_version": "11.7",
"gcc_version": null,
"clang_version": null,
"cmake_version": null,
"os": "Microsoft Windows 10 家用版",
"libc_version": "N/A",
"python_version": "3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.19045-SP0",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": "536.23",
"nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3080 Ti",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.23.5",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==1.13.1+cu117",
"torchdiffeq==0.2.3",
"torchmetrics==0.11.4",
"torchsde==0.2.6",
"torchvision==0.14.1+cu117"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "garbage_collection_threshold:0.9,max_split_size_mb:512",
"is_xnnpack_available": "True"

What browsers do you use to access the UI?

Microsoft Edge

Console logs

No logs, everything runs normally

Additional information

I have already applied the fixes from similar issues like #13917 and #13178 and switched to the dev branch, but the problem still persists. I also tried deleting the problematic model and LoRA and redownloading the LoRA, and it still seems to affect the generations of the new model.


@miguel234457 miguel234457 added the bug-report Report of a bug, yet to be confirmed label Dec 4, 2023
@w-e-w w-e-w added the bug Report of a confirmed bug label Dec 4, 2023
@catboxanon catboxanon removed the bug-report Report of a bug, yet to be confirmed label Dec 4, 2023
@catboxanon
Collaborator

Likely the same issue being experienced in #12937.

@missionfloyd
Collaborator

I've seen this before with the pruned protogen models. Using the unpruned version fixed it.

@miguel234457
Author

miguel234457 commented Dec 5, 2023

> I've seen this before with the pruned protogen models. Using the unpruned version fixed it.

That is not available for my model, and I hope an actual fix is implemented rather than a band-aid solution.

@w-e-w
Collaborator

w-e-w commented Dec 5, 2023

Unless this is actually another issue, this is likely triggered when switching from one XL model to another (XL -> XL).
I don't believe this is related to LoRA.

(screenshot: 2023-12-05 00_13_21_248 explorer)

In the screenshot above, the last eight digits of the filename are the image hash; basically, if those digits are different, then the image is different.

Notice that the image in the pink box, after switching to and back from an XL model, is different from the previous generation.

And switching from XL to a 1.5 model and back actually resets it to the original state.
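
For anyone whose filename pattern does not include the image hash, an equivalent check is to hash the decoded pixels of the saved files directly; a minimal sketch (the file paths below are placeholders):

# Compare two saved generations by hashing their decoded pixels, so PNG metadata
# (the embedded generation parameters) cannot affect the result.
import hashlib
from PIL import Image

def pixel_hash(path):
    with Image.open(path) as im:
        return hashlib.sha256(im.convert("RGB").tobytes()).hexdigest()

before = pixel_hash("outputs/txt2img-images/before-switch.png")  # placeholder paths
after = pixel_hash("outputs/txt2img-images/after-switch.png")
print(before[-8:], after[-8:], "identical" if before == after else "different")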

Also, earlier today AUTO seems to have found a possible fix.

@jmblsmit

jmblsmit commented Dec 6, 2023

This seems similar to #13516 as well.

@Wladastic

This issue persists even without using a LoRA, when generating with img2img.
When I start the webui it loads sd_XL_1.0, for example, but it keeps throwing "NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check." errors until you generate an image in txt2img; when you switch back to img2img, it suddenly works.
When switching models, the same issue occurs again until I generate a random image from txt2img.

For some reason, deleting the repo and cloning it again fixes this problem for a while.
Maybe some caching errors? I have not read the code further, as I don't have the time.
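
If the warm-up behaviour described above is reliable, one stopgap for API users would be to fire a cheap throwaway txt2img call right after every checkpoint switch, before running img2img; a sketch, again assuming a local instance started with --api (the checkpoint title is a placeholder):

# Stopgap based on the observation above: after switching checkpoints, run one
# throwaway txt2img generation so the first real img2img call does not hit the
# NansException. This works around the symptom; it is not a fix.
import requests

BASE = "http://127.0.0.1:7860"

def switch_and_warm_up(checkpoint_title):
    requests.post(f"{BASE}/sdapi/v1/options",
                  json={"sd_model_checkpoint": checkpoint_title}).raise_for_status()
    # Cheap one-step, low-resolution generation just to exercise the UNet once.
    requests.post(f"{BASE}/sdapi/v1/txt2img",
                  json={"prompt": "warm-up", "steps": 1,
                        "width": 256, "height": 256}).raise_for_status()

switch_and_warm_up("sd_xl_base_1.0.safetensors")  # placeholder checkpoint title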

@Ren4issance

Having the issue here also. Not using LoRAs but Textual Inversions (SDXL embeddings.safetensors).
