I have searched the existing issues and checked the recent builds/commits
What happened?
Unable to create an image using the Accelerate with OpenVINO script and the Arc A770 GPU on Windows.
Error:
torch._dynamo.exc.BackendCompilerFailed: backend='openvino_fx' raised:
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 2, 77, 768] to have 4 channels, but got 2 channels instead
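For context, the shapes in the error suggest the UNet's `conv_in` layer (which expects a 4-channel latent in NCHW layout) is being traced with a tensor shaped like a text embedding (2 prompts × 77 tokens × 768 dims) instead. A minimal stdlib sketch of the channel check that produces this message (`check_conv_channels` is a hypothetical helper written for illustration, not webui or PyTorch code):

```python
# Hypothetical re-creation of the Conv2d channel validation that
# produced the error above, using only the standard library.
def check_conv_channels(weight_shape, input_shape, groups=1):
    """Raise the same RuntimeError PyTorch raises on a channel mismatch."""
    out_ch, in_ch_per_group, kh, kw = weight_shape
    expected = in_ch_per_group * groups
    got = input_shape[1]  # NCHW layout: channels are dimension 1
    if got != expected:
        raise RuntimeError(
            f"Given groups={groups}, weight of size {list(weight_shape)}, "
            f"expected input{list(input_shape)} to have {expected} channels, "
            f"but got {got} channels instead"
        )

# The shapes from the traceback: conv_in's weight expects 4 input channels,
# but the traced input is [1, 2, 77, 768] (a text-embedding-like shape).
try:
    check_conv_channels((320, 4, 3, 3), (1, 2, 77, 768))
except RuntimeError as e:
    print(e)
```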
PS D:\Workspace\stable-diffusion-webui-openvino> .\webui-user.bat
venv "D:\Workspace\stable-diffusion-webui-openvino\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: 1.6.0
Commit hash: 44006297e03a07f28505d54d6ba5fd55e0c1292d
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from D:\Workspace\stable-diffusion-webui-openvino\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\Workspace\stable-diffusion-webui-openvino\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 5.4s (prepare environment: 0.2s, import torch: 2.3s, import gradio: 0.6s, setup paths: 0.6s, other imports: 0.4s, load scripts: 0.7s, create ui: 0.4s, gradio launch: 0.3s).
vocab.json: 100%|█████████████████████████████████████████████| 961k/961k [00:00<00:00, 12.9MB/s]
merges.txt: 100%|█████████████████████████████████████████████| 525k/525k [00:00<00:00, 7.95MB/s]
special_tokens_map.json: 100%|███████████████████████████████████| 389/389 [00:00<00:00, 861kB/s]
tokenizer_config.json: 100%|█████████████████████████████████████| 905/905 [00:00<00:00, 904kB/s]
config.json: 100%|██████████████████████████████████████████████████| 4.52k/4.52k [00:00<?, ?B/s]
Applying attention optimization: InvokeAI... done.
Model loaded in 3.1s (load weights from disk: 0.4s, create model: 1.6s, apply weights to model: 0.9s).
{}
Loading weights [6ce0161689] from D:\Workspace\stable-diffusion-webui-openvino\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
OpenVINO Script: created model from config : D:\Workspace\stable-diffusion-webui-openvino\configs\v1-inference.yaml
config.json: 100%|██████████████████████████████████████████████████| 4.55k/4.55k [00:00<?, ?B/s]
pytorch_model.bin: 100%|████████████████████████████████████| 1.22G/1.22G [00:33<00:00, 36.6MB/s]
preprocessor_config.json: 100%|█████████████████████████████████████████| 342/342 [00:00<?, ?B/s]
  0%|| 0/20 [00:00<?, ?it/s]
Exception from src\inference\src\core.cpp:116:
[ GENERAL_ERROR ] Check 'false' failed at src\plugins\intel_gpu\src\plugin\program_builder.cpp:176:
[GPU] ProgramBuilder build failed!
Program build failed(0_part_8): You may enable OCL source dump to see the error log.
[2024-03-06 12:01:07,741] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_13
[2024-03-06 12:01:07,741] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_12
[2024-03-06 12:01:07,741] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_11
[2024-03-06 12:01:07,741] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_10
[2024-03-06 12:01:07,741] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_9
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_8
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_7
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_6
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_5
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_4
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_3
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_2
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat_1
[2024-03-06 12:01:07,743] [0/0] torch._inductor.fx_passes.split_cat: [WARNING] example value absent for node: cat
  0%|| 0/20 [00:14<?, ?it/s]
*** Error completing request
*** Arguments: ('task(kys3j0ugdsbxvei)', 'beautiful girl', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000233E33347C0>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'GPU', True, 'Euler a', True, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\Workspace\stable-diffusion-webui-openvino\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\Workspace\stable-diffusion-webui-openvino\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\modules\txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "D:\Workspace\stable-diffusion-webui-openvino\modules\scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "D:\Workspace\stable-diffusion-webui-openvino\scripts\openvino_accelerate.py", line 1228, in run
        processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
      File "D:\Workspace\stable-diffusion-webui-openvino\scripts\openvino_accelerate.py", line 979, in process_images_openvino
        output = shared.sd_diffusers_model(
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 840, in __call__
        noise_pred = self.unet(
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 328, in _fn
        return fn(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 490, in catch_errors
        return callback(frame, cache_entry, hooks, frame_state)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 641, in _convert_frame
        result = inner_convert(frame, cache_size, hooks, frame_state)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 133, in _fn
        return fn(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 389, in _convert_frame_assert
        return _compile(
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 569, in _compile
        guarded_code = compile_inner(code, one_graph, hooks, transform)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\utils.py", line 189, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 491, in compile_inner
        out_code = transform_code_object(code, transform)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1028, in transform_code_object
        transformations(instructions, code_options)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 458, in transform
        tracer.run()
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2074, in run
        super().run()
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 724, in run
        and self.step()
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 688, in step
        getattr(self, inst.opname)(inst)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2162, in RETURN_VALUE
        self.output.compile_subgraph(
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 857, in compile_subgraph
        self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
      File "D:\Programs\miniconda3\envs\sd\lib\contextlib.py", line 79, in inner
        return func(*args, **kwds)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 957, in compile_and_call_fx_graph
        compiled_fn = self.call_user_compiler(gm)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\utils.py", line 189, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1024, in call_user_compiler
        raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1009, in call_user_compiler
        compiled_fn = compiler_fn(gm, self.example_inputs())
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 117, in debug_wrapper
        compiled_gm = compiler_fn(gm, example_inputs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\__init__.py", line 1607, in __call__
        return self.compiler_fn(model_, inputs_, **self.kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 95, in wrapper
        return fn(model, inputs, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\scripts\openvino_accelerate.py", line 233, in openvino_fx
        return compile_fx(subgraph, example_inputs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1150, in compile_fx
        return aot_autograd(
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 55, in compiler_fn
        cg = aot_module_simplified(gm, example_inputs, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 3891, in aot_module_simplified
        compiled_fn = create_aot_dispatcher_function(
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_dynamo\utils.py", line 189, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 3379, in create_aot_dispatcher_function
        fw_metadata = run_functionalized_fw_and_collect_metadata(
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 757, in inner
        flat_f_outs = f(*flat_f_args)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 3496, in functional_call
        out = Interpreter(mod).run(*args[params_len:], **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\fx\interpreter.py", line 138, in run
        self.env[node] = self.run_node(node)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\fx\interpreter.py", line 195, in run_node
        return getattr(self, n.op)(n.target, args, kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\fx\interpreter.py", line 312, in call_module
        return submod(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
        return fn(*args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1250, in __torch_dispatch__
        return self.dispatch(func, types, args, kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1487, in dispatch
        op_impl_out = op_impl(self, func, *args, **kwargs)
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\torch\_subclasses\fake_tensor.py", line 677, in conv
        conv_backend = torch._C._select_conv_backend(**kwargs)
    torch._dynamo.exc.BackendCompilerFailed: backend='openvino_fx' raised:
    RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 2, 77, 768] to have 4 channels, but got 2 channels instead

    While executing %l__self___conv_in : [num_users=3] = call_module[target=L__self___conv_in](args = (%l_sample_,), kwargs = {})
    Original traceback:
      File "D:\Workspace\stable-diffusion-webui-openvino\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1026, in forward
        sample = self.conv_in(sample)

    Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

    You can suppress this exception and fall back to eager by setting:
        import torch._dynamo
        torch._dynamo.config.suppress_errors = True
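The log's closing suggestion is a configuration-level workaround, not a fix: it tells TorchDynamo to swallow backend compile failures and run the affected graphs in eager PyTorch, which means those graphs lose OpenVINO acceleration. As a sketch, the setting would be applied before generation:

```python
# Workaround quoted in the log: suppress openvino_fx compile failures
# and fall back to eager mode. Note this disables the intended
# OpenVINO acceleration for any graph that fails to compile.
import torch._dynamo

torch._dynamo.config.suppress_errors = True
```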
Additional information
No response
Is there an existing issue for this?
Steps to reproduce the problem
What should have happened?
An image should be generated using the GPU.
Sysinfo
sysinfo-2024-03-06-18-02.txt
What browsers do you use to access the UI?
Microsoft Edge
Console logs