commit ImageToMask node, change the blackpoint and whitepoint options to slider for some nodes.
chflame163 committed Jun 10, 2024
1 parent 5a980f1 commit b1b7a6c
Showing 28 changed files with 5,273 additions and 21 deletions.
19 changes: 19 additions & 0 deletions README.MD
@@ -80,6 +80,8 @@ When this error has occurred, please check the network environment.
## Update
<font size="4">**If dependency package errors occur after updating, please reinstall the relevant dependency packages.**</font><br />

* Add the [ImageToMask](#ImageToMask) node, which converts an image into a mask. It supports converting any channel of the LAB, RGBA, YUV, and HSV modes into a mask, and provides levels adjustment. An optional mask input restricts the output mask to the valid region.
* The blackpoint and whitepoint options in some nodes have been changed to slider adjustment for a more intuitive display. These include [MaskEdgeUltraDetailV2](#MaskEdgeUltraDetailV2), [SegmentAnythingUltraV2](#SegmentAnythingUltraV2), [RmBgUltraV2](#RmBgUltraV2), [PersonMaskUltraV2](#PersonMaskUltraV2), [BiRefNetUltra](#BiRefNetUltra), [SegformerB2ClothesUltra](#SegformerB2ClothesUltra), [BlendIfMask](#BlendIfMask), and [Levels](#Levels).
* The [ImageScaleRestoreV2](#ImageScaleRestoreV2) and [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) nodes add the ```total_pixel``` method for scaling images.
* Add the [MediapipeFacialSegment](#MediapipeFacialSegment) node, used to segment facial features, including the left and right eyebrows, eyes, lips, and teeth.
* Add the [BatchSelector](#BatchSelector) node, used to retrieve specified images or masks from batch images or masks.
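The ```total_pixel``` scaling mentioned above resizes an image toward a target total pixel count while preserving aspect ratio. A minimal sketch of the idea (an illustrative function; the name, rounding, and clamping are assumptions, not the nodes' actual implementation):

```python
import math

def scale_to_total_pixels(width: int, height: int, total_pixels: int) -> tuple[int, int]:
    """Scale (width, height) so the resulting area approximates total_pixels,
    preserving the aspect ratio. Illustrative sketch only."""
    factor = math.sqrt(total_pixels / (width * height))
    return max(1, round(width * factor)), max(1, round(height * factor))

# 1920x1080 scaled to roughly 1 megapixel
print(scale_to_total_pixels(1920, 1080, 1_000_000))  # → (1333, 750)
```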
@@ -1432,6 +1434,23 @@ Node Options:
* fix_threshold: The threshold for repairing masks.
* invert_mask: Whether to reverse the mask.

### <a id="ImageToMask">ImageToMask</a>
Convert an image into a mask. Supports converting any channel of the LAB, RGBA, YUV, and HSV modes into a mask, and provides levels adjustment. An optional mask input restricts the output mask to the valid region.
![image](image/image_to_mask_example.jpg)

Node Options:
![image](image/image_to_mask_node.jpg)
* image: Input image.
* mask: This input is optional. If a mask is provided, only the colors inside the mask are included in the range.
* channel: Channel selection. Any channel of the LAB, RGBA, YUV, or HSV modes can be selected.
* black_point<sup>*</sup>: The black point value for the mask. The value range is 0-255; the default is 0.
* white_point<sup>*</sup>: The white point value for the mask. The value range is 0-255; the default is 255.
* gray_point: The gray point value for the mask. The value range is 0.01-9.99; the default is 1.
* invert_output_mask: Whether to invert the output mask.

<sup>*</sup><font size="3">If the black_point value is greater than the white_point value, the two values are swapped: the larger value is used as white_point and the smaller as black_point.</font>
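The black/white/gray point adjustment described above behaves like a Photoshop-style Levels mapping. A minimal sketch of the documented behavior, including the point-swap rule from the footnote (assuming 8-bit gray values; this illustrates the behavior, not the node's actual code):

```python
def adjust_levels_sketch(value, black_point=0, white_point=255, gray_point=1.0):
    """Map an 8-bit gray value through input levels.
    gray_point acts as a gamma-style midtone factor (0.01-9.99).
    Sketch of the documented behavior only."""
    # Swap if black_point exceeds white_point, as the footnote describes.
    if black_point > white_point:
        black_point, white_point = white_point, black_point
    # Normalize into [0, 1] between the input points, clamping outside values.
    t = (value - black_point) / max(1, white_point - black_point)
    t = min(1.0, max(0.0, t))
    # Apply the midtone gamma, then rescale back to 0-255.
    return round((t ** (1.0 / gray_point)) * 255)

print(adjust_levels_sketch(128))         # → 128 (near-identity mapping)
print(adjust_levels_sketch(64, 0, 128))  # → 128 (stretched by the white point)
```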


### <a id="table1">Shadow</a> & Highlight Mask
Generate masks for the dark and bright parts of the image.
![image](image/shadow_and_highlight_mask_example.jpg)
19 changes: 19 additions & 0 deletions README_CN.MD
@@ -80,6 +80,8 @@ git clone https://github.com/chflame163/ComfyUI_LayerStyle.git
## Update Notes
<font size="4">**If dependency package errors occur after this plugin is updated, please reinstall the relevant dependency packages.**</font>

* Add the [ImageToMask](#ImageToMask) node, which converts an image into a mask. It supports converting any channel of the LAB, RGBA, YUV, and HSV modes into a mask, and provides levels adjustment. An optional mask input restricts the output mask to the valid region.
* The blackpoint and whitepoint options in some nodes have been changed to slider adjustment for a more intuitive display, including [MaskEdgeUltraDetailV2](#MaskEdgeUltraDetailV2), [SegmentAnythingUltraV2](#SegmentAnythingUltraV2), [RmBgUltraV2](#RmBgUltraV2), [PersonMaskUltraV2](#PersonMaskUltraV2), [BiRefNetUltra](#BiRefNetUltra), [SegformerB2ClothesUltra](#SegformerB2ClothesUltra), [BlendIfMask](#BlendIfMask), and [Levels](#Levels).
* The [ImageScaleRestoreV2](#ImageScaleRestoreV2) and [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) nodes add the TotalPixel method for scaling images.
* Add the [MediapipeFacialSegment](#MediapipeFacialSegment) node, used to segment facial features, including the left and right eyebrows, eyes, lips, and teeth.
* Add the [BatchSelector](#BatchSelector) node, used to retrieve specified images or masks from batches of images or masks.
@@ -1417,6 +1419,23 @@ The V2 upgrade of PersonMaskUltra, adding the VITMatte edge processing method. (Note:
* fix_threshold: The threshold for repairing the mask.
* invert_mask: Whether to invert the mask.

### <a id="ImageToMask">ImageToMask</a>
Convert an image into a mask. Supports converting any channel of the LAB, RGBA, YUV, and HSV modes into a mask, and provides levels adjustment. An optional mask input restricts the output mask to the valid region.
![image](image/image_to_mask_example.jpg)

Node Options:
![image](image/image_to_mask_node.jpg)
* image: The input image.
* mask: The mask input. This input is optional; if a mask is provided, only the colors inside the mask are included in the range.
* channel: Channel selection. Any channel of the LAB, RGBA, YUV, or HSV modes can be selected.
* black_point<sup>*</sup>: The black point value for the mask. The value range is 0-255; the default is 0.
* white_point<sup>*</sup>: The white point value for the mask. The value range is 0-255; the default is 255.
* gray_point: The gray point value for the mask. The value range is 0.01-9.99; the default is 1.
* invert_output_mask: Whether to invert the output mask.

<sup>*</sup><font size="3">If the black_point value is greater than the white_point value, the two values are swapped: the larger value is used as white_point and the smaller as black_point.</font>



### <a id="table1">Shadow</a> & Highlight Mask
Generate masks for the dark and bright parts of the image.
Binary file modified image/birefnet_ultra_node.jpg
Binary file modified image/blendif_mask_node.jpg
Binary file added image/image_to_mask_example.jpg
Binary file added image/image_to_mask_node.jpg
Binary file modified image/layercolor_nodes.jpg
Binary file modified image/layermask_nodes.jpg
Binary file modified image/layerutility_nodes.jpg
Binary file modified image/levels_node.jpg
Binary file modified image/mask_edge_ultra_detail_v2_node.jpg
Binary file modified image/person_mask_ultra_v2_node.jpg
Binary file modified image/rmbg_ultra_v2_node.jpg
Binary file modified image/segformer_ultra_node.jpg
Binary file modified image/segment_anything_ultra_v2_node.jpg
4 changes: 2 additions & 2 deletions py/birefnet_ultra.py
@@ -15,8 +15,8 @@ def INPUT_TYPES(cls):
"detail_method": (method_list,),
"detail_erode": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
"detail_dilate": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
- "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01}),
- "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01}),
+ "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01, "display": "slider"}),
+ "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01, "display": "slider"}),
"process_detail": ("BOOLEAN", {"default": True}),
},
"optional": {
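The recurring change in these diffs adds `"display": "slider"` to the option dict in ComfyUI's `INPUT_TYPES`, which asks the front end to render the numeric widget as a slider instead of a spin box. A minimal standalone node sketch (the node name is hypothetical; the option dict format matches the diffs):

```python
class SliderExampleNode:
    """Hypothetical ComfyUI node, for illustration only."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # "display": "slider" switches the number widget to a slider
                "black_point": ("FLOAT", {"default": 0.01, "min": 0.01,
                                          "max": 0.98, "step": 0.01,
                                          "display": "slider"}),
            }
        }

    RETURN_TYPES = ("FLOAT",)
    FUNCTION = "run"

    def run(self, black_point):
        return (black_point,)
```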
4 changes: 2 additions & 2 deletions py/blend_if_mask.py
@@ -23,9 +23,9 @@ def INPUT_TYPES(self):
"image": ("IMAGE", ),
"invert_mask": ("BOOLEAN", {"default": True}), # 反转mask
"blend_if": (blend_if_list,),
- "black_point": ("INT", {"default": 0, "min": 0, "max": 254, "step": 1}),
+ "black_point": ("INT", {"default": 0, "min": 0, "max": 254, "step": 1, "display": "slider"}),
"black_range": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
- "white_point": ("INT", {"default": 255, "min": 1, "max": 255, "step": 1}),
+ "white_point": ("INT", {"default": 255, "min": 1, "max": 255, "step": 1, "display": "slider"}),
"white_range": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
},
"optional": {
4 changes: 2 additions & 2 deletions py/color_correct_levels.py
@@ -15,8 +15,8 @@ def INPUT_TYPES(self):
"required": {
"image": ("IMAGE", ), #
"channel": (channel_list,),
- "black_point": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
- "white_point": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
+ "black_point": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1, "display": "slider"}),
+ "white_point": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1, "display": "slider"}),
"gray_point": ("FLOAT", {"default": 1, "min": 0.01, "max": 9.99, "step": 0.01}),
"output_black_point": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
"output_white_point": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
107 changes: 107 additions & 0 deletions py/image_to_mask.py
@@ -0,0 +1,107 @@
from .imagefunc import *

NODE_NAME = 'ImageToMask'

class ImageToMask:
@classmethod
def INPUT_TYPES(s):
channel_list = ["L(LAB)", "A(Lab)", "B(Lab)",
"R(RGB)", "G(RGB)", "B(RGB)", "alpha",
"Y(YUV)", "U(YUV)", "V(YUV)",
"H(HSV)", "S(HSV)", "V(HSV)"]
return {
"required": {
"image": ("IMAGE", ),
"channel": (channel_list,),
"black_point": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1, "display": "slider"}),
"white_point": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1, "display": "slider"}),
"gray_point": ("FLOAT", {"default": 1.0, "min": 0.01, "max": 9.99, "step": 0.01}),
"invert_output_mask": ("BOOLEAN", {"default": False}), # 反转mask
},
"optional": {
"mask": ("MASK",), #
}
}

RETURN_TYPES = ("MASK",)
RETURN_NAMES = ("mask",)
FUNCTION = "image_to_mask"
CATEGORY = '😺dzNodes/LayerMask'

def image_to_mask(self, image, channel,
black_point, white_point, gray_point,
invert_output_mask, mask=None
):

ret_masks = []
l_images = []
l_masks = []

for l in image:
l_images.append(torch.unsqueeze(l, 0))
m = tensor2pil(l)
if m.mode == 'RGBA':
l_masks.append(m.split()[-1])
else:
l_masks.append(Image.new('L', m.size, 'white'))
if mask is not None:
if mask.dim() == 2:
mask = torch.unsqueeze(mask, 0)
l_masks = []
for m in mask:
l_masks.append(tensor2pil(torch.unsqueeze(m, 0)).convert('L'))

for i in range(len(l_images)):
orig_image = l_images[i] if i < len(l_images) else l_images[-1]
orig_image = tensor2pil(orig_image)
orig_mask = l_masks[i] if i < len(l_masks) else l_masks[-1]

mask = Image.new('L', orig_image.size, 'black')
if channel == "L(LAB)":
mask, _, _, _ = image_channel_split(orig_image, 'LAB')
elif channel == "A(Lab)":
_, mask, _, _ = image_channel_split(orig_image, 'LAB')
elif channel == "B(Lab)":
_, _, mask, _ = image_channel_split(orig_image, 'LAB')
elif channel == "R(RGB)":
mask, _, _, _ = image_channel_split(orig_image, 'RGB')
elif channel == "G(RGB)":
_, mask, _, _ = image_channel_split(orig_image, 'RGB')
elif channel == "B(RGB)":
_, _, mask, _ = image_channel_split(orig_image, 'RGB')
elif channel == "alpha":
_, _, _, mask = image_channel_split(orig_image, 'RGBA')
elif channel == "Y(YUV)":
mask, _, _, _ = image_channel_split(orig_image, 'YCbCr')
elif channel == "U(YUV)":
_, mask, _, _ = image_channel_split(orig_image, 'YCbCr')
elif channel == "V(YUV)":
_, _, mask, _ = image_channel_split(orig_image, 'YCbCr')
elif channel == "H(HSV)":
mask, _, _, _ = image_channel_split(orig_image, 'HSV')
elif channel == "S(HSV)":
_, mask, _, _ = image_channel_split(orig_image, 'HSV')
elif channel == "V(HSV)":
_, _, mask, _ = image_channel_split(orig_image, 'HSV')
mask = normalize_gray(mask)
mask = adjust_levels(mask, black_point, white_point, gray_point,
0, 255)
if invert_output_mask:
mask = ImageChops.invert(mask)
ret_mask = Image.new('L', mask.size, 'black')
ret_mask.paste(mask, mask=orig_mask)

ret_mask = image2mask(ret_mask)

ret_masks.append(ret_mask)

return (torch.cat(ret_masks, dim=0), )


NODE_CLASS_MAPPINGS = {
"LayerMask: ImageToMask": ImageToMask
}

NODE_DISPLAY_NAME_MAPPINGS = {
"LayerMask: ImageToMask": "LayerMask: Image To Mask"
}
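The long if/elif chain in `image_to_mask` above amounts to a lookup from channel name to a mode string (as passed to `image_channel_split`) and a position in its 4-tuple result. A table-driven sketch of the same dispatch (illustrative equivalent; the node itself uses the explicit chain):

```python
# channel name → (mode passed to image_channel_split, index into its 4-tuple);
# mirrors the if/elif chain in image_to_mask above
CHANNEL_TABLE = {
    "L(LAB)": ("LAB", 0), "A(Lab)": ("LAB", 1), "B(Lab)": ("LAB", 2),
    "R(RGB)": ("RGB", 0), "G(RGB)": ("RGB", 1), "B(RGB)": ("RGB", 2),
    "alpha": ("RGBA", 3),
    "Y(YUV)": ("YCbCr", 0), "U(YUV)": ("YCbCr", 1), "V(YUV)": ("YCbCr", 2),
    "H(HSV)": ("HSV", 0), "S(HSV)": ("HSV", 1), "V(HSV)": ("HSV", 2),
}

def split_args(channel: str) -> tuple[str, int]:
    """Return the (mode, index) pair equivalent to the if/elif chain."""
    return CHANNEL_TABLE[channel]

print(split_args("U(YUV)"))  # → ('YCbCr', 1)
```

A table like this also makes it harder for a list entry and its comparison string to drift apart, as happened with the `"S(HSV"` typo fixed above.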
4 changes: 2 additions & 2 deletions py/mask_edge_ultra_detail_v2.py
@@ -21,8 +21,8 @@ def INPUT_TYPES(cls):
"fix_threshold": ("FLOAT", {"default": 0.75, "min": 0.01, "max": 0.99, "step": 0.01}),
"edge_erode": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
"edte_dilate": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
- "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01}),
- "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01}),
+ "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01, "display": "slider"}),
+ "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01, "display": "slider"}),
},
"optional": {
}
4 changes: 2 additions & 2 deletions py/person_mask_ultra_v2.py
@@ -33,8 +33,8 @@ def INPUT_TYPES(self):
"detail_method": (method_list,),
"detail_erode": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
"detail_dilate": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
- "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01}),
- "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01}),
+ "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01, "display": "slider"}),
+ "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01, "display": "slider"}),
"process_detail": ("BOOLEAN", {"default": True}),
},
"optional":
4 changes: 2 additions & 2 deletions py/rembg_ultra_v2.py
@@ -17,8 +17,8 @@ def INPUT_TYPES(cls):
"detail_method": (method_list,),
"detail_erode": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
"detail_dilate": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
- "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01}),
- "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01}),
+ "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01, "display": "slider"}),
+ "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01, "display": "slider"}),
"process_detail": ("BOOLEAN", {"default": True}),
},
"optional": {
4 changes: 2 additions & 2 deletions py/segformer_ultra.py
@@ -62,8 +62,8 @@ def INPUT_TYPES(cls):
"detail_method": (method_list,),
"detail_erode": ("INT", {"default": 12, "min": 1, "max": 255, "step": 1}),
"detail_dilate": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
- "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01}),
- "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01}),
+ "black_point": ("FLOAT", {"default": 0.01, "min": 0.01, "max": 0.98, "step": 0.01, "display": "slider"}),
+ "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01, "display": "slider"}),
"process_detail": ("BOOLEAN", {"default": True}),
}
}
4 changes: 2 additions & 2 deletions py/segment_anything_ultra_v2.py
@@ -26,8 +26,8 @@ def INPUT_TYPES(cls):
"detail_method": (method_list,),
"detail_erode": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
"detail_dilate": ("INT", {"default": 6, "min": 1, "max": 255, "step": 1}),
- "black_point": ("FLOAT", {"default": 0.15, "min": 0.01, "max": 0.98, "step": 0.01}),
- "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01}),
+ "black_point": ("FLOAT", {"default": 0.15, "min": 0.01, "max": 0.98, "step": 0.01, "display": "slider"}),
+ "white_point": ("FLOAT", {"default": 0.99, "min": 0.02, "max": 0.99, "step": 0.01, "display": "slider"}),
"process_detail": ("BOOLEAN", {"default": True}),
"prompt": ("STRING", {"default": "subject"}),
},
8 changes: 4 additions & 4 deletions py/text_join.py
@@ -11,13 +11,13 @@ def __init__(self):
def INPUT_TYPES(cls):
return {
"required": {
- "text_1": ("STRING", {"multiline": True}),
+ "text_1": ("STRING", {"multiline": False}),

},
"optional": {
- "text_2": ("STRING", {"multiline": True}),
- "text_3": ("STRING", {"multiline": True}),
- "text_4": ("STRING", {"multiline": True}),
+ "text_2": ("STRING", {"multiline": False}),
+ "text_3": ("STRING", {"multiline": False}),
+ "text_4": ("STRING", {"multiline": False}),
}
}

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,7 +1,7 @@
[project]
name = "comfyui_layerstyle"
description = "A set of nodes for ComfyUI it generate image like Adobe Photoshop's Layer Style. the Drop Shadow is first completed node, and follow-up work is in progress."
- version = "1.0.3"
+ version = "1.0.4"
license = "MIT"
dependencies = ["numpy", "pillow", "torch", "matplotlib", "Scipy", "scikit_image", "opencv-contrib-python", "pymatting", "segment_anything", "timm", "addict", "yapf", "colour-science", "wget", "mediapipe", "loguru", "typer_config", "fastapi", "rich", "google-generativeai", "diffusers", "omegaconf", "tqdm", "transformers", "kornia", "image-reward", "ultralytics", "blend_modes", "blind-watermark", "qrcode", "pyzbar", "psd-tools"]
