What happened?
I am a newbie and I'm facing a bug I cannot solve.
When I use the foreground_to_blend workflow to generate a car poster, the output picture is very dark and has a mosaic-like artifact.
Even stranger: if I use a foreground_image generated by LayerDiffuse to generate a new blend_image, the problem does not occur.
Steps to reproduce the problem
My workflow:
The prompt and checkpoint I used:
Output:
What should have happened?
How can I get a better-quality output? I would be extremely grateful for any help.
Commit where the problem happens
ComfyUI:
ComfyUI-layerdiffuse:
Sysinfo
Graphics Card: Nvidia
OS: Intel
Console logs
Requested to load SDXL
Loading 1 new model
WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
Merged with diffusion_model.input_blocks.0.0.weight channel changed from torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3]
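For context, the 4-to-8 channel change in the log is likely expected rather than the bug itself: the foreground-to-blend model presumably conditions on an extra 4-channel latent, so LayerDiffuse widens the UNet's first conv (input_blocks.0.0) from 4 to 8 input channels before merging weights. A minimal sketch of that kind of input-conv expansion (illustrative only, not the extension's actual code; `expand_input_conv` is a hypothetical helper):

```python
import torch

def expand_input_conv(weight: torch.Tensor, extra_in: int) -> torch.Tensor:
    # weight has layout [out_ch, in_ch, kH, kW], e.g. [320, 4, 3, 3] for SDXL.
    out_ch, in_ch, kh, kw = weight.shape
    # Hypothetical choice: new input channels start at zero, so the widened
    # conv initially ignores the extra conditioning latent.
    pad = torch.zeros(out_ch, extra_in, kh, kw,
                      dtype=weight.dtype, device=weight.device)
    return torch.cat([weight, pad], dim=1)  # -> [out_ch, in_ch + extra_in, kH, kW]

w = torch.randn(320, 4, 3, 3)
w8 = expand_input_conv(w, 4)
print(tuple(w8.shape))  # (320, 8, 3, 3)
```

If the merge succeeds (the second log line), the shape warning alone usually does not explain a dark, mosaic output; that points more toward the wrong checkpoint or decode path for this workflow.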
Workflow json file
fg2ble.json
Additional information
No response