I have searched the existing issues and checked the recent builds/commits
What happened?
A known issue, which I think was briefly fixed, appears to still be broken in 1.6 when using LoRAs and switching between XL models. It may have to do with the new model cache settings I've used:
Maximum number of checkpoints loaded at the same time = 2
Only keep one model on device = true
Checkpoints to cache in RAM (obsolete) = 0
VAE Checkpoints to cache in RAM = 0
Number of Lora networks to keep cached in memory = 0
Model cache size (requires restart) = 0
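For reference, the settings above live in the webui's config.json. The key names below are my best guesses from the 1.6 settings UI and are not verified against the source, so treat this purely as an illustrative fragment:

```json
{
  "sd_checkpoints_limit": 2,
  "sd_checkpoints_keep_in_cpu": true,
  "sd_checkpoint_cache": 0,
  "sd_vae_checkpoint_cache": 0,
  "lora_in_memory_limit": 0
}
```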
Steps to reproduce the problem
1. With an XL model loaded and a LoRA in the prompt, generate an image.
2. Switch to another XL model and generate.
3. Switch back to the first model and generate with all of the same settings; the result is different.

You may need to switch models twice in step 2 before going back to the first model to see the new result. This may vary depending on the "Maximum number of checkpoints loaded at the same time" setting.
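One plausible mechanism for the differing results (purely my speculation, not the webui's actual code) is a model cache that stores a checkpoint's weights after a LoRA has been patched in, so "switching back" restores already-modified weights instead of a fresh load. A toy sketch of that hazard:

```python
# Toy sketch of a model-cache hazard, NOT the actual webui implementation:
# if a checkpoint's weights are cached *after* a LoRA patch was applied,
# switching back restores the patched weights instead of the originals.

def apply_lora(weights, lora_delta, scale=1.0):
    """Return weights with a LoRA delta merged in."""
    return {k: v + scale * lora_delta.get(k, 0.0) for k, v in weights.items()}

class NaiveModelCache:
    def __init__(self):
        self._cache = {}

    def store(self, name, weights):
        # Bug: stores whatever weights the model currently has,
        # including any LoRA modifications.
        self._cache[name] = dict(weights)

    def load(self, name, fresh_loader):
        # A cache hit returns possibly-patched weights.
        return dict(self._cache.get(name) or fresh_loader(name))

# Fresh checkpoints "on disk" (scalar stand-ins for real weight tensors).
disk = {"model_A": {"w": 1.0}, "model_B": {"w": 5.0}}
fresh_load = lambda name: disk[name]

cache = NaiveModelCache()

# Step 1: load model_A, apply the LoRA, generate, then cache on switch-away.
weights = apply_lora(fresh_load("model_A"), {"w": 0.25})  # generation saw w = 1.25
cache.store("model_A", weights)

# Step 2: switch to model_B and generate.
weights = cache.load("model_B", fresh_load)

# Step 3: switch back to model_A; the LoRA is applied again on top of the
# already-patched cached weights, so generation sees different weights.
weights = apply_lora(cache.load("model_A", fresh_load), {"w": 0.25})
print(weights["w"])  # 1.5, not the 1.25 used in step 1
```

If something like this is happening, zeroing the LoRA cache alone would not help, because the corruption lives in the checkpoint cache, not the LoRA cache.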
What should have happened?
Same result after switching models
Sysinfo
Linux (Ubuntu), 32GB RAM, AMD RX 6800 XT
Version 1.6
What browsers do you use to access the UI ?
Mozilla Firefox
Console logs
No errors
Additional information
No response
catboxanon changed the title from "[Bug]: Switching XL models back and forth leads to different images" to "[Bug]: Switching models back and forth leads to different images" on Sep 5, 2023.
I've seen someone mention this happens with SD1 models too, so I've retitled the issue accordingly. I'm not able to reproduce it myself, but for a different reason: at step 3 described above, generating fails with:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
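For context, PyTorch raises that error whenever an op receives tensors on different devices, e.g. an index/embedding lookup whose weight tensor was moved to cuda:0 while the index tensor stayed on the CPU. That would be plausible if a cached model is only partially moved back onto the device. A torch-free simulation of the check (hypothetical helper names, just to illustrate the failure mode):

```python
# Minimal simulation of PyTorch's same-device check (not real torch code).
class FakeTensor:
    def __init__(self, data, device="cpu"):
        self.data, self.device = data, device

    def to(self, device):
        return FakeTensor(self.data, device)

def index_select(weight, index):
    # Mirrors the check behind:
    # "Expected all tensors to be on the same device, but found at least
    #  two devices, cpu and cuda:0!"
    if weight.device != index.device:
        raise RuntimeError(
            f"Expected all tensors to be on the same device, but found at "
            f"least two devices, {index.device} and {weight.device}!"
        )
    return FakeTensor([weight.data[i] for i in index.data], weight.device)

weight = FakeTensor([10, 20, 30]).to("cuda:0")  # moved back onto the GPU
index = FakeTensor([2, 0])                      # left behind on the CPU

try:
    index_select(weight, index)
except RuntimeError as e:
    print(e)  # cpu vs cuda:0 mismatch, as in the report

# Moving the index to the same device avoids the error:
out = index_select(weight, index.to("cuda:0"))
print(out.data)  # [30, 10]
```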
Either way, I'm going to mark this as a bug, because something seems to be going wrong regardless. My guess is it's related to the new model cache setting.
Also, @df2df -- please attach the sysinfo file (found under Settings -> Sysinfo in the webui). Normally this issue would have been closed for lacking it, but I'm giving it the benefit of the doubt since others and I have hit this too.