[Bug]: Previous model affects current model's image generation even with the same seed #14200
Comments
Likely the same issue being experienced in #12937.
I've seen this before with the pruned Protogen models. Using the unpruned version fixed it.
An unpruned version is not available for my model, and I hope an actual fix is implemented rather than a band-aid solution.
Unless this is actually another issue, this is likely triggered when switching from one XL model to another XL model. In the screenshot above, the last eight digits of the filename are the image hash; if those digits differ, the images are different. Notice that the image generated after switching to an XL model and back is different from the previous generation, while switching from an XL model to a 1.5 model and back actually resets it to the original state. Also, earlier today AUTO seems to have found a possible fix.
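A minimal sketch of the kind of check described above, assuming the two generations are saved as PNG files (the paths below are placeholders, not the reporter's actual output files): hashing the raw file bytes shows whether two runs with identical settings really produced identical output.

```python
# Hypothetical helper (not part of webui): compare two generated PNGs
# byte-for-byte to confirm whether two runs with identical settings
# really produced identical images.
import hashlib
from pathlib import Path

def image_hash(path: Path) -> str:
    """Return a short SHA-256 digest of the file contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()[:8]

# Placeholder paths; point these at the generations before and after the model switch.
before = Path("outputs/txt2img-images/before-switch.png")
after = Path("outputs/txt2img-images/after-switch.png")

print(image_hash(before), image_hash(after))
print("identical" if image_hash(before) == image_hash(after) else "different")
```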
This seems similar to #13516 as well.
This issue persists even without using a LoRA, when generating with img2img. Deleting the repo and cloning it again fixes the problem for a while, for some reason.
Having the issue here as well. Not using LoRAs, but Textual Inversions (SDXL embeddings.safetensors).
Is there an existing issue for this?
What happened?
This problem comes from switching models while generating the same image with the same LoRA. I tried generating the same image using the same seed, LoRA, and model. After switching between the old and new model, I get a significantly different result, as if effects of the previous model are bleeding into the image from the new model, which should not happen since models are separate from each other.
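A rough sketch of how this switch-and-compare could be exercised through the webui API instead of the UI, assuming the server is started with the --api flag; the checkpoint names, prompt, and seed below are placeholders, not the reporter's actual settings.

```python
# Reproduction sketch (assumes webui is running locally with --api).
# Checkpoint names and the generation payload are placeholders.
import base64
import hashlib
import requests

URL = "http://127.0.0.1:7860"
PAYLOAD = {"prompt": "test prompt", "seed": 1234, "steps": 20, "cfg_scale": 7}

def set_checkpoint(name: str) -> None:
    # Switch the loaded checkpoint via the options endpoint.
    requests.post(f"{URL}/sdapi/v1/options",
                  json={"sd_model_checkpoint": name}).raise_for_status()

def generate() -> str:
    # Run txt2img and return a short hash of the raw PNG bytes.
    r = requests.post(f"{URL}/sdapi/v1/txt2img", json=PAYLOAD)
    r.raise_for_status()
    png = base64.b64decode(r.json()["images"][0])
    return hashlib.sha256(png).hexdigest()[:8]

set_checkpoint("modelA.safetensors")        # placeholder checkpoint name
first = generate()

set_checkpoint("modelB_XL.safetensors")     # placeholder: intermediate model
generate()

set_checkpoint("modelA.safetensors")        # switch back to the original model
second = generate()

# With the same seed and settings these should match; in this bug they differ.
print(first, second)
```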
Steps to reproduce the problem
What should have happened?
Models are separate from each other; switching between models should not affect the image generations of another model.
Sysinfo
{
"Platform": "Windows-10-10.0.19045-SP0",
"Python": "3.10.8",
"Version": "v1.7.0-RC-2-g883d6a2b",
"Commit": "883d6a2b34a2817304d23c2481a6f9fc56687a53",
"Script path": "D:\sd\stable-diffusion-webui",
"Data path": "D:\sd\stable-diffusion-webui",
"Extensions dir": "D:\sd\stable-diffusion-webui\extensions",
"Checksum": "d30082a1b5d297d57317fae3463502cc4fd72cb555d82e6b244c6cc999d70e10",
"Commandline": [
"launch.py",
"--xformers",
"--opt-split-attention",
"--no-half-vae",
"--upcast-sampling",
"--no-gradio-queue"
],
"Torch env info": {
"torch_version": "1.13.1+cu117",
"is_debug_build": "False",
"cuda_compiled_version": "11.7",
"gcc_version": null,
"clang_version": null,
"cmake_version": null,
"os": "Microsoft Windows 10 家用版",
"libc_version": "N/A",
"python_version": "3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.19045-SP0",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": "536.23",
"nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3080 Ti",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.23.5",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==1.13.1+cu117",
"torchdiffeq==0.2.3",
"torchmetrics==0.11.4",
"torchsde==0.2.6",
"torchvision==0.14.1+cu117"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "garbage_collection_threshold:0.9,max_split_size_mb:512",
"is_xnnpack_available": "True"
What browsers do you use to access the UI?
Microsoft Edge
Console logs
Additional information
I have already applied fixes from similar issues like #13917 and #13178 and switched to the dev branch, but the problem still persists. I also tried deleting the problematic model and LoRA and redownloading the LoRA, but it still seems to affect the generations of the new model.
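A hedged diagnostic that follows from the above, assuming the checkpoint and LoRA paths below are replaced with the real ones: hash the files on disk before and after the problem appears, to rule out file corruption and leave runtime state carried over between model switches as the remaining suspect.

```python
# Diagnostic sketch (paths are placeholders): hash the checkpoint and LoRA
# files on disk; unchanged hashes across the redownload suggest the files
# themselves are fine and the issue lies in runtime state.
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for p in [Path("models/Stable-diffusion/model.safetensors"),  # placeholder
          Path("models/Lora/my_lora.safetensors")]:           # placeholder
    print(p.name, file_sha256(p))
```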