
[Bug]: Stable diffusion model failed to load - cannot import name 'VQModelInterface' from 'ldm.models.autoencoder' #14085

Closed
ssalka opened this issue Nov 24, 2023 · 1 comment
Labels: bug-report (Report of a bug, yet to be confirmed)

Comments


ssalka commented Nov 24, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

I started getting this error after upgrading a few extensions (ControlNet, AnimateDiff, Dynamic Prompts/wildcards, auto tls). I have been using the dev branch to get this fix and am on commit 5e80d9ee; I also tried the more recent commit 8aa51f68 and did a full re-install by backing up my venv directory and letting the webui setup create a new one.

I checked in repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py, and did not see anything named VQModelInterface. Did something go wrong with my installation/setup?
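
A quick way to confirm this (a diagnostic sketch of my own, not webui code; the repository path is the one from my install) is to list what the stock module actually exports:

import importlib
import sys

# Run from the webui root with the venv active; the repo path below is assumed
# from the default installation layout.
sys.path.insert(0, r"repositories\stable-diffusion-stability-ai")

autoencoder = importlib.import_module("ldm.models.autoencoder")
print(sorted(name for name in dir(autoencoder) if not name.startswith("_")))
# If VQModelInterface is not in this list, the import in ddpm_edit.py fails
# exactly as in the traceback below.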

Here is the relevant stack trace:

Loading VAE weights specified in settings: C:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying attention optimization: xformers... done.
loading stable diffusion model: ImportError
Traceback (most recent call last):
  File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\stable-diffusion-webui\modules\initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "C:\stable-diffusion-webui\modules\shared_items.py", line 112, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\stable-diffusion-webui\modules\sd_models.py", line 522, in get_sd_model
    load_model()
  File "C:\stable-diffusion-webui\modules\sd_models.py", line 655, in load_model
    sd_hijack.model_hijack.hijack(sd_model)
  File "C:\stable-diffusion-webui\modules\sd_hijack.py", line 259, in hijack
    import modules.models.diffusion.ddpm_edit
  File "C:\stable-diffusion-webui\modules\models\diffusion\ddpm_edit.py", line 27, in <module>
    from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL
ImportError: cannot import name 'VQModelInterface' from 'ldm.models.autoencoder' (C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py)


Stable diffusion model failed to load

Steps to reproduce the problem

  1. git checkout 5e80d9ee (dev branch)
  2. Run webui-user.bat (using args --medvram --xformers)
  3. See the error preventing the SD model from loading (hopefully it reproduces)

What should have happened?

The SD model should load and the UI setup should finish.

Sysinfo

sysinfo-2023-11-24-17-19.txt

What browsers do you use to access the UI?

Brave

Console logs

C:\stable-diffusion-webui>webui-user.bat
venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-298-g5e80d9ee
Commit hash: 5e80d9ee99c5899e5e2b130408ffb65a0585a62a
Launching Web UI with arguments: --medvram --xformers
[-] ADetailer initialized. version: 23.11.1, num models: 9
2023-11-24 09:13:42,199 - ControlNet - INFO - ControlNet v1.1.419
ControlNet preprocessor location: C:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-11-24 09:13:42,378 - ControlNet - INFO - ControlNet v1.1.419
Default key/cert pair was already generated by webui
Certificate trust store ready
Loading weights [0729a1570d] from C:\stable-diffusion-webui\models\Stable-diffusion\stable-diffusion-1_5.safetensors
Creating model from config: C:\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL:  https://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 21.1s (prepare environment: 5.6s, import torch: 5.9s, import gradio: 1.6s, setup paths: 1.5s, initialize shared: 0.4s, other imports: 1.2s, setup codeformer: 0.2s, list SD models: 0.2s, load scripts: 3.3s, create ui: 0.8s, gradio launch: 0.5s).
Loading VAE weights specified in settings: C:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying attention optimization: xformers... done.
loading stable diffusion model: ImportError
Traceback (most recent call last):
  File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\Me\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\stable-diffusion-webui\modules\initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "C:\stable-diffusion-webui\modules\shared_items.py", line 112, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\stable-diffusion-webui\modules\sd_models.py", line 522, in get_sd_model
    load_model()
  File "C:\stable-diffusion-webui\modules\sd_models.py", line 655, in load_model
    sd_hijack.model_hijack.hijack(sd_model)
  File "C:\stable-diffusion-webui\modules\sd_hijack.py", line 259, in hijack
    import modules.models.diffusion.ddpm_edit
  File "C:\stable-diffusion-webui\modules\models\diffusion\ddpm_edit.py", line 27, in <module>
    from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL
ImportError: cannot import name 'VQModelInterface' from 'ldm.models.autoencoder' (C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py)


Stable diffusion model failed to load

Additional information

Currently enabled extensions:

webui-extensions

ssalka added the bug-report label Nov 24, 2023

ssalka commented Nov 24, 2023

Solved my own problem: it was because the LDSR extension had somehow become disabled. I assume this happened through some mechanism of the upgrade process, though I don't understand why, since I didn't target it for an upgrade and didn't turn it off manually. Found this out via this discussion.
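
For context on why disabling LDSR causes this: as far as I can tell, the built-in LDSR extension monkey-patches a VQModelInterface class back onto ldm.models.autoencoder at startup (the Stability AI repo no longer ships it), and ddpm_edit.py then imports it from there. A simplified sketch of that mechanism (my understanding, not the actual extension code):

import ldm.models.autoencoder  # requires the webui venv and repo paths, as above

class VQModelInterface:
    # Placeholder only; the real class restored by LDSR wraps a VQ autoencoder.
    pass

# With LDSR enabled, the extension effectively does something like this before
# modules/models/diffusion/ddpm_edit.py is imported:
setattr(ldm.models.autoencoder, "VQModelInterface", VQModelInterface)

# With LDSR disabled, the attribute is never attached, so
# `from ldm.models.autoencoder import VQModelInterface` raises the ImportError above.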

ssalka closed this as completed Nov 24, 2023