[Bug]: Reusing model to load the next leads to different image with same seed (Lora functionality related?) #13516
Comments
Got rid of that check in sd_models.py, so it doesn't reuse the model anymore. The problem no longer exists, but it introduces a new issue when trying to load a model that had already been loaded within the same instance. For anyone willing to try it out, this is what I changed: it was checking against the limit of checkpoints allowed in RAM before, so I skipped that and forced it to compare to 0 (see the sketch after this comment).
--Edit 3:
By removing this ^ in sd_models.py, it will no longer try to move stuff around, leading to bad seeds and bad gens. I know this was coded for the sake of optimizing the loader, but it's brought me nothing but issues. I also replicated the bug on a fresh installation of Win10 with an older NVIDIA driver. I'll keep an eye out for errors, but I think this is it, and the loader should no longer bother anyone, or at least not me. |
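For anyone trying to follow the comment above without reading the fork, here is a minimal, self-contained sketch of the idea being described: treating the "checkpoints allowed in RAM" limit as 0 so a previously loaded model is never handed back, and every switch loads fresh weights from disk. None of these names come from sd_models.py; they are hypothetical stand-ins for illustration only.

```python
# Hypothetical, simplified illustration only -- these names are NOT the ones
# used in sd_models.py; they just mirror the idea of a "keep N checkpoints in
# RAM and reuse them" policy versus "always load fresh from disk".

loaded_models = []      # models currently kept in RAM
CHECKPOINTS_LIMIT = 2   # stand-in for the "checkpoints allowed in RAM" setting


def load_from_disk(name):
    """Stand-in for a full checkpoint load; always returns pristine weights."""
    print(f"loading {name} from disk")
    return {"name": name, "weights": f"clean weights of {name}"}


def get_model(name, reuse=True):
    """Return a model, either by reusing a RAM copy or by loading it fresh.

    reuse=True  -> original behaviour: a previously loaded copy is returned
                   as-is, so any state it picked up (e.g. LoRA tweaks) persists.
    reuse=False -> the edit described above: behave as if the limit were 0,
                   drop the cached copies and always reload from disk.
    """
    if not reuse:
        loaded_models.clear()       # nothing is kept around to be reused
        return load_from_disk(name)

    for model in loaded_models:
        if model["name"] == name:
            print(f"reusing {name} already in RAM")
            return model            # whatever happened to it earlier comes along

    model = load_from_disk(name)
    loaded_models.append(model)
    if len(loaded_models) > CHECKPOINTS_LIMIT:
        loaded_models.pop(0)        # evict the oldest cached copy
    return model


if __name__ == "__main__":
    a = get_model("Model A")
    a["weights"] += " + leftover LoRA state"              # simulate state picked up in use
    print(get_model("Model A")["weights"])                # reused copy: leftover state included
    print(get_model("Model A", reuse=False)["weights"])   # fresh load: clean again
```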
I personally just encountered this issue now (been using A1111 every day), but I've been 'already up to date' for weeks when I git pull every day. Which is interesting because this issue was just opened 16 hours ago. |
I may have a similar bug too, #13473 |
I believe this was introduced with #12227, around two months ago. People will only notice if they try replicating generations; otherwise, they never will. There are also models that don't affect the outcome that much, so chances are you used a model that didn't change your gens in a way you'd notice. I believe Orangemix2 -> AnythingV5 had a smaller impact of around 10% on the generated image, while Based66 -> AnythingV5 can change the gen entirely. So I guess it's mainly about models that differ too much from each other. Still, webui shouldn't do a soft merge of these. By the way, despite the loader being optimized to go from one model to another, I've found that disabling the transfer of any data between the two quickens the loading of models to the point that I now wait about 2s compared to 5s, without any secondary effects (so far). I fear, however, that this is plaguing recent previews of models on Civitai, because people are unaware this is corrupting their gens and are therefore uploading imagery that won't be easily replicated unless they launched webui with that model specifically. |
For me it happens even when I don't change models; the only thing that happened is that I quit A1111 and restarted it (maybe in a new console/venv activation, not sure, because sometimes it doesn't change), but it can also change if I don't restart it at all, unless I'm going schizo
Yeah I always save good gens to run hires fix on later, and it's been really annoying not being able to replicate the same result |
Switching lora/lycoris weights used to corrupt them until I changed models. It happens less often nowadays, but it happens sometimes. Maybe that's related to your issue. Either way, I've also seen these changes randomly. |
Dunno about switching. I know that when I used xyz with the addnet extension, the models kept bleeding into each other when it changed models, so that might be related, but that doesn't happen with the built-in extra networks, at least. I also don't switch around the lora in the prompt when this happens; the prompt stays identical, and the only thing that changes is a1111 being restarted |
How long does it take you to reproduce this? |
I'll give it a go in a bit; I'm not at my pc now. I'll try just restarting, restarting the venv, and also changing models, and see if there is a difference in what happens between each of these, then I'll edit this msg |
An update on this. Gens remain the same after editing out those bits of code. Seems good to go, although not the best fix. You can look at my fork and grab the sd_models.py if you want to try it yourself. Edit: |
@yoyoinneverland are you saying that your commit is still a valid fix to get the same result when doing high res fix? master...yoyoinneverland:stable-diffusion-webui-nomod:master |
When using your average model fetched from civitai, yeah. |
I noticed a few weeks ago, while using the additional networks extension with loras for xyz between models, that all the images looked like the first model bled into the other 10 models next to it. You're saying that's related to what's happening here? By the way, I tested the modified sd_models.py, but I still wasn't able to replicate stuff I saved while doing xyzs a few days ago. I'll try to check stuff where I wasn't using a lora and see if those results changed |
Well, it seems to be triggered by using Lora/Lycoris/Locon. I'll be running more tests in the meantime. ----- EDIT 1: I had an idea and played with the weights of a gen I couldn't reproduce anymore. Lo and behold, decreasing the weights by 0.001 to 0.02 took me closer to the original gen. It does seem that the weights increase with corrupted gens. If any of you still have access to the loras and the prompt, do try decreasing the strength of some of the loras by 0.001 to 0.02. |
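For anyone who wants to try the weight-nudging suggestion above systematically, here is a rough sketch that re-runs the same seed while stepping a LoRA's strength down from 1.000 to 0.980 in 0.001 increments, using webui's optional HTTP API (launch with --api). The URL, LoRA name, prompt, and seed below are placeholders to swap for your own, and the accepted payload fields may vary between versions.

```python
# Rough sketch: sweep a LoRA's strength down in 0.001 steps and save each result,
# so you can spot which value gets you back to the original gen.
# Assumes webui was launched with --api; URL, LoRA name, prompt and seed are placeholders.
import base64
import requests

BASE = "http://127.0.0.1:7860"
PROMPT = "masterpiece, 1girl, <lora:myLora:{w:.3f}>"   # hypothetical LoRA name

for step in range(0, 21):            # 1.000, 0.999, ..., 0.980
    w = 1.0 - step * 0.001
    payload = {
        "prompt": PROMPT.format(w=w),
        "seed": 1234567890,          # the seed of the gen you are trying to get back
        "steps": 20,
        "cfg_scale": 7,
        "width": 512,
        "height": 512,
    }
    r = requests.post(f"{BASE}/sdapi/v1/txt2img", json=payload, timeout=600)
    r.raise_for_status()
    img_b64 = r.json()["images"][0]
    with open(f"lora_weight_{w:.3f}.png", "wb") as f:
        f.write(base64.b64decode(img_b64.split(",", 1)[-1]))
    print(f"saved lora_weight_{w:.3f}.png")
```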
So given the nature of the issue, any images you generated after switching models should be irreproducible on the fixed version, since in theory fixing the issue would cause those images to be different. You may want to try reproducing images that you made on v1.5.x instead, and that might be a better way to test. |
Some gens can be reproduced to some extent if you still remember the order you swapped models around. It helps to check out the outputs right before the gen you wish to reproduce and look at the date and time. Not always 100% the same, but it does come close. So in this case it might be best to use the unedited file so you can trigger the bug on purpose. |
I just met the same problem. Can it be fixed by simply force-unloading the model, like this pull? |
Yes, but it can introduce problems with niche and obscure models. I haven't tried merging models with it, so I recommend using it only for loading and generating with sd1.5 models. If needed, I'll look into the XL loader too. Both work. I'd like to add that someone issued a fix for an unrelated issue in the dev branch, so it might be best to use the alternative fix for the time being and then update to that when it's available. |
If you want to actually fix it, do you need to fix lora or remove the "reusing model" function? |
Well, you'd have to bring back the old way models were loaded two months ago. I think that will fix it for sure. |
I think the modified sd_models made it so that when I switch models it doesn't get unloaded from vram |
I'm not sure how that modified sd_models.py keeps the model from unloading; my change is a little different from the one above
It does get unloaded (at least over here); it outright empties the container. I will test it in a bit.
|
Confirming with evidence. When attempting to recreate this generation here to play with highres fix etc. on it, identical parameters & lora weight etc. produced the next image (the hands) :( However, dropping my lora weight from (1.0) to (0.999) rectifies it :) I have had wild fluctuations in image reproduction over the last few months, and back then this micro lowering of the lora weights did not help (nor did ANYthing else). The most recent Nvidia driver update (537.58) seemed to eliminate this for me, until I noticed this tonight. Hope this can shed light for one of you and help to remove this toxic bug <3 :) |
Hello Psykhosisis. You mentioned the last driver eliminated this issue for you? I like your gen, by the way. |
it appears to have reduced it significantly at the very least, i am able to replicate results when i have needed to:) |
If someone runs a |
Eh. Updated to dev since I read some things got fixed, but I still get different gens going from yesterday into today. |
Still having this issue as of now; can someone look into this again? |
Is there an existing issue for this?
What happened?
Same seed, same venv, same device, same driver generating two different images depending on which model is loaded first, due to reusing loaded models to load the next. It tends to happen only with the first model loaded. Going from Model B to Model A to Model C and back to Model A will have the same result as going from B to A only. Although this is replicated 95% of the time, there are some situations in which hopping between many models actually has an effect, but it's hard to pinpoint when.
Steps to reproduce the problem
For this example, we'll have Model A and Model B, different models.
What should have happened?
Generation shouldn't differ. Even when switching and loading model X or model Y first, generation should stay the same.
Sysinfo
sysinfo-2023-10-05-21-03.txt
What browsers do you use to access the UI ?
Mozilla Firefox
Console logs
Additional information
Can be fixed by increasing the number of models that can be loaded at the same time, although that's not optimal, and only so many can be loaded into RAM until you run out... So as long as you avoid reusing models to load another, you should be fine, but alas... (a sketch of bumping that setting follows below)
(This was also replicated in the latest build of A1111)
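As a sketch of the workaround mentioned under Additional information: the checkpoint cache size is a regular webui option, so it can also be raised programmatically through the optional HTTP API (launch with --api). The option key used below is an assumption on my part and may differ between versions; list the options first to confirm what your build exposes.

```python
# Minimal sketch, assuming webui is running with --api and that the checkpoint
# cache size is exposed under the option key shown below (the exact key name
# is an assumption and may differ between versions -- check the listing first).
import requests

BASE = "http://127.0.0.1:7860"

options = requests.get(f"{BASE}/sdapi/v1/options", timeout=60).json()
print({k: v for k, v in options.items() if "checkpoint" in k})  # inspect cache-related keys

# Keep more checkpoints loaded at once, so switching back to a model does not
# rebuild it from another model's weights.
resp = requests.post(f"{BASE}/sdapi/v1/options",
                     json={"sd_checkpoints_limit": 3},   # assumed option key
                     timeout=60)
resp.raise_for_status()
```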