Soft Inpainting #14208
Conversation
…ional values (e.g. 0.5) are accepted.
…ith a naive blend with the original latents.
…tent blending formula that preserves details in blend transition areas.
…ss offset parameter, changed labels in UI and for saving to/pasting data from PNG files.
…isual difference between the original and modified latent images. This should remove ghosting and clipping artifacts from masks, while preserving the details of largely unchanged content.
… "del" references with the intention of minimizing allocations and easing garbage collection.
# Conflicts: # modules/processing.py
… "ValueError: Images do not match" *shudder*
…g the feature, and centralizes default values to reduce the amount of copy-pasta.
…t to be used as a reference.
…ecause of mismatched tensor sizes. The 'already_decoded' decoded case should also be handled correctly (tested indirectly).
# Conflicts: # modules/processing.py
Looks like something that should have been there since the start! Given that change, it would also make sense to have a soft brush in the UI now; that might do wonders to blend some inpainting in. Currently it's quite a pain, with tons of low-intensity steps needed.
I'll have you know that when I set out to generate these particular examples, they were all first attempts, no cherry picking. Want me to generate a grid? ;;}
I meant that you can do that kind of masking properly by using a smaller CFG value and more steps, so it gradually inpaints. Your solution is what I wanted from the beginning. Very important for great inpainting :)
… soft_inpainting.
Thank you for merging! ::>
@CodeHatchling are you the sole author of the approach or did you use ideas from someone's paper?
Can anyone summarize this feature for me? Does this mean that with this feature, we no longer need a separate inpainting checkpoint? For example, normally I have two separate models: one is Abcmodel.safetensor as a normal checkpoint, and the other is Abcmodel_inpainting.safetensor as an inpainting checkpoint
Thank you.
You can make any checkpoint into an inpainting one in the checkpoint merger tab: https://github.com/light-and-ray/sd-webui-replacer?tab=readme-ov-file#how-to-get-an-inpainting-model This soft inpainting, as I understand it, can avoid sharp contours, but still requires an inpainting model.
Nope! Works with any model. It kind of acts as an alternative to an inpainting model or a ControlNet inpainting module - at least, from what others have told me (I'm not that familiar with those two features).
It was designed to work with any model. I tested it with the built-in SD 1.5 model that A1111 comes with, and have tried it with a variety of others. I'm not sure how well it will work with an inpainting model specifically, but the extension also passes a non-binary mask to them as conditioning. To use it, just provide a mask. Black (0) pixels will not be changed, white (1) pixels will be processed by the diffuser, and any in-between shades of grey (0 to 1) will define the transition region in which the image is only partially denoised. It helps the features imagined by the denoiser blend naturally with the unmasked original content.
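The mask semantics described above (0 keeps the original, 1 is fully denoised, intermediate greys are a partial transition) can be sketched as a simple per-pixel interpolation. This is an illustrative sketch only, not the actual webui code, which operates on latents during the denoising schedule rather than as a single post-hoc blend:

```python
import numpy as np

def soft_blend(original, denoised, mask):
    """Blend denoised content into the original using a non-binary mask.

    mask values: 0.0 -> keep original pixel, 1.0 -> take denoised pixel,
    in-between -> a proportional mix, forming the soft transition region.
    """
    mask = np.clip(mask, 0.0, 1.0)
    return original * (1.0 - mask) + denoised * mask

# With original = 0 and denoised = 1, the result simply equals the mask.
original = np.zeros((2, 2))
denoised = np.ones((2, 2))
mask = np.array([[0.0, 0.5],
                 [1.0, 0.25]])
print(soft_blend(original, denoised, mask))
```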
What settings do you use to get this to work? I've tested it with high denoising ranging from 0.5-1 and even tested it with a mask blur range of 8-64. I am on the inpainting tab and have also tried inpaint sketch. Is there a step I am missing? I've looked all over for more info and haven't found anything that gives clear instructions on its usage. Any help is appreciated!~ Thanks! <3
Hi. Nice work.
I got this error: why?
The error message you're encountering indicates a runtime warning in the file soft_inpainting.py at line 161. Specifically, it's warning about a division by zero in the expression converted_mask / half_weighted_distance. This typically occurs when half_weighted_distance is zero or near zero, which might be due to an input incorrectly set to zero, or a missing inpaint mask. A screenshot of your inpainting settings would help answer this question better. Anyone else can chime in if they know what it is.
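One generic way to suppress that class of warning is to clamp the denominator before dividing. This is an illustrative workaround only, not the project's actual fix; the names `converted_mask` and `half_weighted_distance` are taken from the error message above:

```python
import numpy as np

def safe_divide(converted_mask, half_weighted_distance, eps=1e-6):
    """Divide, but clamp the denominator so a zero half_weighted_distance
    cannot produce the 'division by zero' runtime warning."""
    denom = np.maximum(half_weighted_distance, eps)
    return converted_mask / denom

# A zero denominator now yields a large finite value instead of inf/nan.
result = safe_divide(np.array([0.5, 1.0]), np.array([0.0, 2.0]))
print(result)
```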
Out of curiosity: Can this 'soft inpainting' also be used for upscaling, to steer the denoising? It would be awesome to have precise control over what and where a change should be made when upscaling.
It's not really an "upscaler", but you can increase the resolution of your inpainting by adjusting the Width x Height settings to 768x768 or higher. But your render time goes up the higher the resolution you set. Hope that helps.
Hi all, |
Is there a way to use it through the API?
Yes, look here: #15138
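For reference, scripts like this are usually driven through the `alwayson_scripts` field of an `/sdapi/v1/img2img` payload. The sketch below is an assumption-laden example: the script key (`"soft inpainting"`), the argument order, and the default values shown are all guesses that should be verified against the linked issue and your webui version before use:

```python
# Hypothetical img2img payload sketch for A1111's REST API.
# ASSUMPTIONS: the script key and the args order/values below are unverified;
# check issue #15138 and your installed version before relying on them.
payload = {
    "init_images": ["<base64-encoded image>"],
    "mask": "<base64-encoded mask>",
    "denoising_strength": 0.75,
    "alwayson_scripts": {
        "soft inpainting": {
            # Assumed order: [enabled, schedule bias, preservation strength,
            #  transition contrast boost, mask influence,
            #  difference threshold, difference contrast]
            "args": [True, 1.0, 0.5, 4.0, 0.0, 0.5, 2.0],
        }
    },
}
print(payload["alwayson_scripts"]["soft inpainting"]["args"])
```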
Oh, this is likely just due to the difference threshold value being set to 0, I believe. The intended outcome for a threshold of 0 was that if a given latent pixel changed at all, the changed latent would be included in the resulting composite with full opacity.
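The described threshold behaviour can be sketched as follows. This is an illustrative model of the rule stated above (any change at threshold 0 gives full opacity), not the project's exact compositing formula:

```python
import numpy as np

def composite_opacity(diff, threshold):
    """Opacity with which a changed latent pixel enters the composite.

    threshold == 0: any nonzero change -> full opacity (1.0).
    threshold  > 0: small differences are attenuated, ramping up to 1.0.
    """
    if threshold <= 0:
        return (np.abs(diff) > 0).astype(float)
    return np.clip(np.abs(diff) / threshold, 0.0, 1.0)

diffs = np.array([0.0, 0.1, 1.0])
print(composite_opacity(diffs, 0.0))   # unchanged pixels stay at 0
print(composite_opacity(diffs, 0.5))   # gradual ramp for a nonzero threshold
```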
Hello. I too want to use this through an API, which I see from this reply is possible. However, can I supply my own blurry mask? That is, treat the mask as a ready "map" with grey values that I prepare, and skip any additional blurring that the code would usually perform. Thanks!
May I ask if it can be used in ComfyUI?
Description
Soft inpainting allows the denoiser to work directly with soft-edged (i.e. non-binary) masks, whereby unmasked content blends seamlessly into inpainted content through gradual transitions. It is conceptually similar to per-pixel denoising strength.
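A soft-edged mask of the kind described above can be produced from a hard 0/1 mask by blurring its edges. The sketch below uses a simple separable box blur in pure NumPy purely for illustration; the webui applies its own mask-blur machinery:

```python
import numpy as np

def soften_mask(binary_mask, radius=1):
    """Turn a hard 0/1 mask into a soft-edged mask by box-blurring it.

    Interior pixels stay near 1, exterior near 0, and the boundary becomes
    a gradient of intermediate greys (the partial-denoise transition zone).
    """
    m = binary_mask.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):  # blur rows, then columns
        m = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, m)
    return np.clip(m, 0.0, 1.0)

hard = np.zeros((5, 5))
hard[1:4, 1:4] = 1.0  # a hard-edged 3x3 region
print(soften_mask(hard))
```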
Code changes
Fixes
#14024
Summary:
Screenshots/videos:
Checklist:
Notes:
Why not an extension?
Implementing this required integration directly with processing.py and the denoising process. There are no "hooks" for extensions to intervene and modify their behaviour in the way that is needed. For example:
Implementing this as an extension would require duplicating a large chunk of the code, and would likely provide a suboptimal user experience.
Mild concerns
Either `mask_for_overlay` or `masks_for_overlay` is used, depending on whether soft inpainting is enabled. The intent was to minimize differences in behaviour for extensions already built around the vanilla inpainting. However, this could lead to confusion in the future, and ideally the attributes should be unified, as there was already a high number of attributes containing versions of the mask at different processing stages.
It was unclear under what circumstances `samples_ddim` would have the condition `already_decoded`, but I did attempt to handle that case nonetheless.
Planned work