
Plugin fails to run with ComfyUI #513

Open
depeschzeu opened this issue Sep 20, 2024 · 8 comments

Comments


depeschzeu commented Sep 20, 2024

I installed everything following the instructions.
ComfyUI seems to work fine in the browser, but it fails to work with the Photoshop plugin.
Once I click 'Generate' in the plugin (with txt2img, for example), I get the following error in the console window where ComfyUI is running in the background:


got prompt
Failed to validate prompt for output 9:

* VAELoader 57:
  - Required input is missing: vae_name
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}

I can't find a way to make it work, and I don't understand what's wrong.
I'm running Photoshop 2024.

@cdmusic2019

The way I do it is: find 'extra_model_paths.yaml' in the 'comfyui' directory, set up a VAE directory in there specifically for the plugin, and put only one VAE file in there that corresponds to the model being used.
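A sketch of what that setup might look like in 'extra_model_paths.yaml' (the section name, base path, and folder name below are illustrative, not taken from an actual install; only the 'vae:' entry is the point):

```yaml
# Hypothetical extra_model_paths.yaml fragment — adjust base_path and
# folder names to your own installation.
comfyui_plugin:
    base_path: C:/stabdif/comfyui/
    # A dedicated folder containing only the one VAE file that matches
    # the checkpoint the Photoshop plugin will use.
    vae: models/vae_for_plugin/
```

ComfyUI reads every section of this file at startup, so the plugin-facing VAE folder is picked up alongside the default 'models/vae' directory.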


depeschzeu commented Sep 21, 2024

I read an article about this plugin and about running generative AI locally on a PC to use it with Photoshop, and I am a newbie.
This is the first time I've tried this technology, so there are many things I'm seeing for the first time, and I haven't found much information on how to customize and troubleshoot it when it doesn't run right out of the box. I've tested a few methods I found on the Internet, but I can't find an in-depth wiki or guides. I know maintaining documentation is a hard task, but still, so little is explained. It feels like it was made 'by those who know, for those who know', and I guess that's because it was born in the coder community. But I'm not a coder...

Could you please be a bit more specific about how to apply this fix the right way? What can the paths be, what strings should be inserted, and where? Where do I get VAE files, etc.?
I've found a 'put_vae_here' dummy file inside '/comfyui/models/vae'; the directory is otherwise empty.
I've also found 'README_VERY_IMPORTANT.txt' with brief information that 'extra_model_paths.yaml.example' should be renamed and edited if you need to share models between ComfyUI and other UIs. I guess that describes this situation. I opened the file to see its contents, but it isn't quite clear what exactly should be done to fix the issue without random experimentation.

I'm sorry to bother you, but this is a completely new world for me, and I really need some more explanation. I want to learn, but it's hard with the information scattered here and there in small amounts, so I can't figure it out by myself.

@cdmusic2019

I am sharing my models with A1111; you can refer to my setup below. I hope it helps.
[five screenshots of the setup, not reproduced here]

@cdmusic2019

If you are not sharing A1111 models, just put the VAE under 'comfyui\models\vae' and don't set up the 'yaml' file. VAEs can be downloaded from 'https://civitai.com/'.


depeschzeu commented Sep 21, 2024

I failed to get A1111 running. I am using quite an old-generation AMD RX GPU (an RX 580 with 8 GB of VRAM). If I understood correctly, that means I can't use SDXL; I think it is another Stable Diffusion model made for recent, powerful GPUs with a lot of VRAM, and I've read somewhere that it's not possible to run it on a low-spec GPU.

I've only managed to run ComfyUI with the --directml command-line option, plus some additional flags recommended in a tutorial I read.

Sadly, I cannot run A1111, because it returns multiple errors and complains about having no CUDA even with the --directml command line. From what I've read, --directml is supposed to be universal, but something goes wrong.
Also, the Python installation scripts run by webui.bat on first launch fail to download some models from the Internet, I believe, because there are multiple error messages about not finding a repository at some URL, etc.

At least I've managed to install and run ComfyUI. To get anything out of A1111, I guess I need a complete installation and a clean first run without serious errors; it's probably missing some files.
To run ComfyUI, I downloaded a file called 'PrunedEmaonly_v15' and put it into the '/models/checkpoints' directory. The link was given in the instructions for making ComfyUI work with an AMD GPU. I believe it's a prepared Stable Diffusion 1.5 model, or so I think.

Why does it ask for a VAE when run from the plugin, but not in the browser interface?
There are so many things I don't know yet.
I've read that a VAE translates the image into number space for the AI and vice versa, and that the VAE is often included inside the model file. I have no detailed information about the model I'm using with ComfyUI, but I suspect it must have a VAE inside, since ComfyUI doesn't ask for one and no separate VAE files are present.
I've also read an article explaining that SD won't work without a VAE at all, which means the VAE must obviously be inside the downloaded model.
So why does it ask for VAE files when I use the Photoshop plugin, if the VAE is already inside the model file?
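The likely explanation, judging from the validation error above: the plugin submits its own workflow through ComfyUI's API, and that workflow contains a standalone VAELoader node instead of using the VAE baked into the checkpoint. A fragment of such an API prompt might look roughly like this (the node id matches the error message; the filename is illustrative):

```json
{
  "57": {
    "class_type": "VAELoader",
    "inputs": { "vae_name": "vae-ft-mse-840000-ema-pruned.safetensors" }
  }
}
```

If no file exists in any configured VAE folder, the node's required 'vae_name' input can't be filled, and validation fails with exactly the "Required input is missing: vae_name" message from the first post. The browser workflow presumably decodes with the checkpoint's embedded VAE, so it never hits this.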


depeschzeu commented Sep 21, 2024

Well, I've got VAE files, and I've also downloaded some other models from civitai.com.
I tried something in PS and it seems to start, but now I get this:

Loading 1 new model
loaded partially 64.0 63.99951171875 0
!!! Exception during processing !!! Given groups=1, weight of size [512, 16, 3, 3], expected input[1, 4, 64, 64] to have 16 channels, but got 4 channels instead
Traceback (most recent call last):
  File "C:\stabdif\comfyui\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "C:\stabdif\comfyui\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "C:\stabdif\comfyui\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\stabdif\comfyui\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "C:\stabdif\comfyui\nodes.py", line 284, in decode
    return (vae.decode(samples["samples"]), )
  File "C:\stabdif\comfyui\comfy\sd.py", line 328, in decode
    pixel_samples[x:x+batch_number] = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
  File "C:\stabdif\comfyui\comfy\ldm\models\autoencoder.py", line 137, in decode
    x = self.decoder(z, **kwargs)
  File "C:\stabdif\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\stabdif\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stabdif\comfyui\comfy\ldm\modules\diffusionmodules\model.py", line 625, in forward
    h = self.conv_in(z)
  File "C:\stabdif\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\stabdif\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stabdif\comfyui\comfy\ops.py", line 106, in forward
    return super().forward(*args, **kwargs)
  File "C:\stabdif\venv\lib\site-packages\torch\nn\modules\conv.py", line 458, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\stabdif\venv\lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[1, 4, 64, 64] to have 16 channels, but got 4 channels instead

The same model works in the browser, and images generate without errors.
It must have something to do with the API, the settings, or something else.
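For what it's worth, the RuntimeError in that traceback is a channel-count conflict: an SD 1.5 checkpoint produces 4-channel latents, while the VAE the plugin loaded has a first convolution expecting 16-channel latents (as Flux/SD3-family VAEs do), so the wrong VAE file is being picked up. A minimal sketch of the check involved, with made-up names, assuming nothing about the plugin's internals:

```python
# Hypothetical sketch: why decoding fails when the latent's channel count
# doesn't match what the VAE decoder's first convolution expects.

LATENT_CHANNELS = {"sd15": 4, "sdxl": 4, "flux": 16}  # typical values

def check_vae_compat(latent_channels: int, vae_in_channels: int) -> None:
    """Raise if a latent can't be fed to a VAE decoder, mimicking the
    conv2d channel check that produced the RuntimeError above."""
    if latent_channels != vae_in_channels:
        raise RuntimeError(
            f"expected input to have {vae_in_channels} channels, "
            f"but got {latent_channels} channels instead"
        )

# Feeding an SD 1.5 latent (4 channels) to a 16-channel VAE reproduces
# the shape of the error in the traceback:
try:
    check_vae_compat(LATENT_CHANNELS["sd15"], 16)
except RuntimeError as e:
    print(e)  # expected input to have 16 channels, but got 4 channels instead
```

The fix suggested below (a VAE that matches SD 1.5, i.e. one with 4 latent channels) follows directly from this.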

@cdmusic2019

This plugin supports ComfyUI but can't select VAE files, which is a shame; the author may have given up on updating it. So I've added a separate folder to store the VAEs used by PS, because without it I get an error when running. For 1.5 models you can use this VAE: https://civitai.com/models/276082/vae-ft-mse-840000-ema-pruned-or-840000-or-840k-sd15-vae


depeschzeu commented Sep 23, 2024

I've put this VAE file into the VAE folder, and it seems to work only sometimes, because I have to restart SD; it doesn't free VRAM or something...
I also can't find a way to make it work with inpainting. That is actually the most interesting feature, since it lets you edit selected parts of a picture using prompts. It does SOMETHING, but the plugin seems to do something wrong with PS layers. It doesn't return errors, but the result is a mess, and the rest of the original image disappears, turning pure white except for the selected square area. It looks more like a bug in the plugin itself.
