
[Bug]: RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same #543

Open
1 of 6 tasks
morpheuskibbe opened this issue Oct 8, 2024 · 21 comments


Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I followed the install instructions and the webui starts, but any attempt to generate anything fails with the error in the title.

Steps to reproduce the problem

  1. Install the webui
  2. Wait for it to load a model
  3. Run any prompt

What should have happened?

The image should have been generated.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2024-10-08-20-52.json

Console logs

venv "C:\Stable 2\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Version: v1.10.1-amd-10-g2872b02d
Commit hash: 2872b02d3b935665c1b52a32e8bc53b07ec5d540
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --skip-torch-cuda-test
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
ONNX: version=1.19.2 provider=CUDAExecutionProvider, available=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Loading weights [c262d30f65] from C:\Stable 2\stable-diffusion-webui-directml\models\Stable-diffusion\steinrealism_b5.safetensors
Running on local URL:  http://127.0.0.1:7860
Creating model from config: C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\configs\inference\sd_xl_base.yaml

To create a public link, set `share=True` in `launch()`.
C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Startup time: 7.5s (prepare environment: 10.9s, initialize shared: 1.0s, load scripts: 0.3s, create ui: 0.2s, gradio launch: 0.4s).
creating model quickly: OSError
Traceback (most recent call last):
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\hub.py", line 402, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1232, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1339, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1854, in _raise_on_head_call_error
    raise head_call_error
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1746, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1666, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 364, in _request_wrapper
    response = _request_wrapper(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 388, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-670599ce-7ec9076a4c4b932440fadc44;6e899fec-7345-4ed2-9d64-fc59e4c30840)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\morph\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\morph\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\morph\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\shared_items.py", line 190, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
    self.conditioner = instantiate_from_config(
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
    embedder = instantiate_from_config(embconfig)
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_utils.py", line 3247, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\hub.py", line 425, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Applying attention optimization: InvokeAI... done.
Model loaded in 52.1s (load weights from disk: 0.4s, create model: 6.3s, apply weights to model: 5.4s, apply half(): 0.2s, calculate empty prompt: 39.6s).
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
Reusing loaded model steinrealism_b5.safetensors [c262d30f65] to load yiffymix_V33.safetensors [9c58e47317]
Loading weights [9c58e47317] from C:\Stable 2\stable-diffusion-webui-directml\models\Stable-diffusion\yiffymix_V33.safetensors
*** Error completing request
*** Arguments: ('task(4l78ho2231si4nr)', <gradio.routes.Request object at 0x00000207F4694BB0>, 'fox', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 1083, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 1441, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_models_xl.py", line 43, in apply_model
        return self.model(x, t, cond)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 993, in forward
        h = module(h, emb, context)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 98, in forward
        x = layer(x, emb)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 317, in forward
        return checkpoint(
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
        return func(*inputs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 329, in _forward
        h = self.in_layers(x)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
        input = module(input)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 275, in forward
        return super().forward(x.float()).type(x.dtype)
      File "C:\Stable 2\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 614, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\normalization.py", line 287, in forward
        return F.group_norm(
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2588, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float
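The title error and the `mixed dtype (CPU)` error above are both the same underlying problem: a float32 tensor reaching parameters that were cast to half precision (fp16), which happens when the webui applies `half()` to the model but the tensors on a CPU/DirectML fallback path stay float32. A minimal standalone sketch of that mismatch (hypothetical layer, not the webui's actual code):

```python
import torch

# A layer whose weight and bias have been cast to fp16,
# as the webui's apply half() step does to the whole model.
layer = torch.nn.Conv2d(3, 8, kernel_size=3).half()

x = torch.randn(1, 3, 16, 16)  # float32 input, as on a CPU fallback path

try:
    layer(x)  # float input vs. Half weight/bias -> dtype mismatch
except RuntimeError as e:
    print(type(e).__name__)  # RuntimeError

# Making the dtypes agree fixes it; casting the layer back to
# float32 always works on CPU (fp16 CPU support varies by op/version).
out = layer.float()(x)
print(out.dtype)  # torch.float32
```

This is why workarounds like `--no-half` / `--precision full` (which keep the model in float32) typically make the error go away at the cost of memory and speed.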

---
Creating model from config: C:\Stable 2\stable-diffusion-webui-directml\configs\v1-inference.yaml
creating model quickly: OSError
Traceback (most recent call last):
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\hub.py", line 402, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1232, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1339, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1854, in _raise_on_head_call_error
    raise head_call_error
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1746, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1666, in get_hf_file_metadata
    r = _request_wrapper(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 364, in _request_wrapper
    response = _request_wrapper(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 388, in _request_wrapper
    hf_raise_for_status(response)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-67059a8d-11e205db16cd535c14442630;a57df5e9-487e-4d1e-b300-692a3fafbfa6)

Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\morph\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\morph\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\ui_settings.py", line 316, in <lambda>
    fn=lambda value, k=k: self.run_settings_single(value, key=k),
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\ui_settings.py", line 95, in run_settings_single
    if value is None or not opts.set(key, value):
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\options.py", line 165, in set
    option.onchange()
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 14, in f
    res = func(*args, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_models.py", line 992, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_models.py", line 831, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
    res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_utils.py", line 3247, in from_pretrained
    resolved_config_file = cached_file(
  File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\hub.py", line 425, in cached_file
    raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`

Failed to create model quickly; will retry using slow method.
Loading VAE weights found near the checkpoint: C:\Stable 2\stable-diffusion-webui-directml\models\Stable-diffusion\yiffymix_V33.vae.pt
Applying attention optimization: InvokeAI... done.
Model loaded in 8.2s (create model: 0.8s, apply weights to model: 1.8s, load VAE: 0.3s, calculate empty prompt: 5.2s).
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(u8wr8c7xjp18an1)', <gradio.routes.Request object at 0x00000207F48538E0>, 'fox', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 1083, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 1441, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 36, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
        x = layer(x)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same

---
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(f1sfsalvb07839z)', <gradio.routes.Request object at 0x00000207F45009A0>, 'fox', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 849, in process_images
        res = process_images_inner(p)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 1083, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\processing.py", line 1441, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 233, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_samplers_cfg_denoiser.py", line 249, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 36, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
        x = layer(x)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Stable 2\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "C:\Stable 2\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Input type (float) and bias type (struct c10::Half) should be the same

---

Additional information

No response

@CS1o

CS1o commented Oct 12, 2024

If you really want to use the CPU mode of the webui, you need to additionally add:
--no-half --precision full to the webui-user.bat COMMANDLINE_ARGS.
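The reason these flags matter: the traceback ends in F.conv2d receiving a float32 input while the checkpoint's weights and bias are float16, and PyTorch refuses mixed dtypes. A minimal sketch of the same mismatch, and of what --no-half effectively does (keeping the model in fp32 so the dtypes agree) — a simplified illustration, not the webui's actual code path:

```python
import torch

# fp16 layer, as when the model is loaded in half precision
conv = torch.nn.Conv2d(3, 8, kernel_size=3).half()
x = torch.randn(1, 3, 8, 8)  # fp32 input, as the sampler feeds it here

try:
    conv(x)  # mixed dtypes -> RuntimeError, like the one in the log
except RuntimeError as e:
    print(e)

# --no-half amounts to keeping the model in fp32 instead:
y = conv.float()(x)
print(y.dtype)  # torch.float32
```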

@morpheuskibbe
Author

If you really want to use the CPU mode of the webui, you need to additionally add: --no-half --precision full to the webui-user.bat COMMANDLINE_ARGS.

I really DON'T want to use CPU mode, but it's just doing it anyway. Is there a setting I can't find?

@CS1o

CS1o commented Oct 12, 2024

I really DON'T want to use CPU mode, but it's just doing it anyway. Is there a setting I can't find?

What's your GPU then?
You can delete everything and then start fresh with my [AMD] Automatic1111 with ZLUDA guide from here:
https://github.com/CS1o/Stable-Diffusion-Info/wiki/Webui-Installation-Guides

@morpheuskibbe
Author

6800XT

It USED to work, but I attempted to update and now it doesn't, and I was dumb enough to update the original install instead of using a new folder, so I can't just go back.

Though, to be clear, I did make a new folder once that failed, so it should be fresh.

@CS1o

CS1o commented Oct 12, 2024

6800XT

Then it's best to follow my install guide; afterwards you can move the models over and you shouldn't have any problems.
And never add --skip-torch-cuda-test to the webui-user.bat.

@morpheuskibbe
Author

Thanks, it is running now. Though... now I'm having a totally different problem, but at least it's running.

New problem: https://www.youtube.com/watch?v=qOJWORCgl3M specifically the last 10 seconds. It looks like it's working, but the output is fucked. Though that's likely a model thing, I assume.

@CS1o

CS1o commented Oct 12, 2024

I would need to know your txt2img settings and which model you used.

@morpheuskibbe
Author

parameters

human, female, slim, muscular, green eyes,

relax, relaxing, throne room, gold chair, regal, queen, couch, recline, detailed background,

Steps: 80, Sampler: Euler a, Schedule type: Automatic, CFG scale: 7, Seed: 825779648, Size: 512x512, Model hash: c262d30f65, Model: steinrealism_b5, Version: v1.10.1-amd-11-gefddd05e

Model: https://civitai.com/models/793771/steinrealism

I'm guessing it's the 'VAE' thing it mentions? Maybe? I tried the thing where you rename the VAE to match the original checkpoint but with a .vae.pt extension, but then it wouldn't run at all. So that's a thing.

@morpheuskibbe
Author

I just tried two other models and am getting the same behavior: the preview looks fine until the very last step, when things go insane.

@CS1o

CS1o commented Oct 12, 2024

Try without a VAE, and note that Pony models are trained on 1024x1024, so try that resolution.
You can also try deleting the config.json and the ui-config.json and relaunching.

If you still get the error, please provide a full cmd log.

@morpheuskibbe
Author

Switching from "Euler a" to "DPM++ 2M" seems to prevent this issue. No clue why that would be.

DDIM CFG++ also works

LMS fails entirely, as does DPM adaptive.

It seems to be related to the sampling method somehow.

@CS1o

CS1o commented Oct 12, 2024

Hmm, Pony models should normally work with Euler a. But you can also try an SD 1.5 or SDXL model and check if it happens there too.

@morpheuskibbe
Author

One of the others I tried lists its base as SD 1.5, and it has an identical issue. Euler or Euler a seems to go fine until the last second, and then the output goes borked.

@CS1o

CS1o commented Oct 12, 2024

Very strange. What you can try is updating your Python to 3.10.11 64-bit and then deleting the venv folder.

@morpheuskibbe
Author

That's actually how I got it working to this point. I followed your link, updated Python to 3.10.11, and did a fresh git clone (via Git Bash) from the link you provided earlier.

I followed the "[AMD] Automatic1111 with DirectML" section.

Would any of the commands it lists potentially be causing this? I put all the ones it said to use in the command args.

So, these: "--use-directml --medvram --opt-sub-quad-attention --opt-split-attention --no-half-vae --upcast-sampling"
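For reference, those flags go on the COMMANDLINE_ARGS line of webui-user.bat. Assuming the standard A1111 template, the file would look roughly like this (the flag list is the one quoted above):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml --medvram --opt-sub-quad-attention --opt-split-attention --no-half-vae --upcast-sampling

call webui.bat
```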

@CS1o

CS1o commented Oct 12, 2024

Oh, you used DirectML before too?
And no, those commands shouldn't cause such issues.
But with your GPU you should switch to ZLUDA, as it has much better performance, is more compatible, and is less buggy.

@morpheuskibbe
Author

I gave it a go; the ZLUDA one fails to start.
Cwindowssystem32cmd.exe.txt

@CS1o

CS1o commented Oct 12, 2024

This is a HIP bug: it found your CPU's iGPU (gfx1036) before your dedicated GPU.

To fix this you have two options:
Open up the Device Manager and, under Display Adapters, disable the Radeon(TM) Graphics (the iGPU).

Or you can add set HIP_VISIBLE_DEVICES=1 to the webui-user.bat.

Then relaunch the webui-user.bat.
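Assuming the standard webui-user.bat layout, the second option would look roughly like this (HIP_VISIBLE_DEVICES=1 hides device 0, the iGPU, so HIP only sees the dedicated card):

```bat
@echo off

rem Hide the iGPU (device 0) from HIP so only the dedicated GPU is used
set HIP_VISIBLE_DEVICES=1
set COMMANDLINE_ARGS=

call webui.bat
```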

@morpheuskibbe
Author

morpheuskibbe commented Oct 12, 2024

I tried set HIP_VISIBLE_DEVICES=1 and got this:

return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

Then I tried the disable-Radeon option (and removed the set command) and it starts, but the performance seems to be utter fucking garbo.

Failed to create model quickly; will retry using slow method.
Applying attention optimization: Doggettx... done.
Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...
Model loaded in 276.7s (load weights from disk: 0.3s, create model: 0.9s, apply weights to model: 4.2s, apply half(): 0.1s, load textual inversion embeddings: 0.1s, calculate empty prompt: 270.9s).
Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...
0%| | 0/20 [00:00<?, ?it/s]Compiling in progress. Please wait...
Compiling in progress. Please wait...
Compiling in progress. Please wait...

I gave up after a while; it was just in a permanent 'compiling' loop.

@CS1o

CS1o commented Oct 13, 2024

The first run will take 15-40 minutes; it's a one-time compilation step.
So let it load, and after 40 minutes relaunch the webui-user.bat and try generating an image again.
Then it should take seconds.

@morpheuskibbe
Author

IT'S WORKING. And the weird thing with Euler isn't happening with ZLUDA.
