GetColorToneV2 node: add 'mask' method to the color_of option
chflame163 committed Jul 16, 2024
1 parent 3fe248b commit 8bf4160
Showing 7 changed files with 58 additions and 27 deletions.
4 changes: 3 additions & 1 deletion README.MD
@@ -80,6 +80,7 @@ When this error has occurred, please check the network environment.
## Update
<font size="4">**If the dependency package error after updating, please reinstall the relevant dependency packages. </font><br />

* [GetColorToneV2](#GetColorToneV2) node adds the ```mask``` method to the color selection option, which can accurately obtain the main color and average color within the mask.
* [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) node adds the "background_color" option.
* [LUT Apply](#LUT) node adds the "strength" option.
* Add the [AutoAdjustV2](#AutoAdjustV2) node, with an optional mask input and support for multiple automatic color adjustment modes.
@@ -814,13 +815,14 @@ V2 upgrade of GetColorTone. You can specify the dominant or average color to get

The following changes have been made on the basis of GetColorTone:
![image](image/get_color_tone_v2_node.jpg)
* color_of: Provides three options, entire, background, and subject, to select the color of the entire picture, the background, or the subject, respectively.
* color_of: Provides four options, mask, entire, background, and subject, to select the color of the mask area, the entire picture, the background, or the subject, respectively (the sketch after the output list below illustrates the mask method).
* remove_background_method: There are two methods of background recognition: BiRefNet and RMBG V1.4.
* invert_mask: Whether to invert the mask.
* mask_grow: Mask expansion. For subject, a larger value brings the obtained color closer to the color at the center of the subject.

Output:
* image: Solid color image output, the same size as the input image.
* mask: Mask output.
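
The sketch below is a rough, hypothetical illustration of what the ```mask``` method computes (the main color and average color restricted to the white area of the mask). It is not the node's actual implementation; the 127 threshold and the color quantization step are assumptions.

```python
# Hypothetical sketch of masked color extraction; not the node's actual code.
from PIL import Image
import numpy as np

def masked_colors(image: Image.Image, mask: Image.Image):
    """Return (dominant, average) hex colors of the pixels inside a white mask."""
    pixels = np.asarray(image.convert('RGB')).reshape(-1, 3)
    inside = np.asarray(mask.convert('L')).reshape(-1) > 127   # assumed threshold; mask must match the image size
    selected = pixels[inside]                                  # assumes the mask is not empty

    # Average color: per-channel mean of the selected pixels.
    average = tuple(int(v) for v in selected.mean(axis=0))

    # Dominant color: most frequent value after coarse quantization (an assumption).
    quantized = (selected // 32) * 32
    values, counts = np.unique(quantized, axis=0, return_counts=True)
    dominant = tuple(int(v) for v in values[counts.argmax()])

    to_hex = lambda c: '#{:02x}{:02x}{:02x}'.format(*c)
    return to_hex(dominant), to_hex(average)
```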

### <a id="table1">ExtendCanvas</a>
Extend the canvas
8 changes: 5 additions & 3 deletions README_CN.MD
@@ -80,6 +80,7 @@ git clone https://github.com/chflame163/ComfyUI_LayerStyle.git
## Update Notes
<font size="4">**If a dependency package error occurs after updating this plugin, please reinstall the relevant dependency packages.**

* [GetColorToneV2](#GetColorToneV2) node adds the ```mask``` method to the color selection option, which can accurately obtain the main color and average color within the mask.
* [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) node adds the background_color option.
* [LUT Apply](#LUT) node adds the strength option.
* Add the [AutoAdjustV2](#AutoAdjustV2) node, with an optional mask input and support for multiple automatic color adjustment modes.
@@ -800,18 +801,19 @@ V2 upgrade of ImageScaleByAspectRatio

### <a id="table1">GetColorToneV2</a>
V2 upgrade of GetColorTone. You can specify whether to get the dominant or average color of the subject or background.
![image](image/get_color_tone_v2_example.jpg)
![image](image/get_color_tone_v2_example.jpg)
![image](image/get_color_tone_v2_example2.jpg)

The following changes have been made on the basis of GetColorTone:
![image](image/get_color_tone_v2_node.jpg)
* color_of: Provides three options, entire, background, and subject, to select the color of the entire picture, the background, or the subject, respectively.
* color_of: Provides four options, mask, entire, background, and subject, to select the color of the mask area, the entire picture, the background, or the subject, respectively.
* remove_background_method: The method of background recognition; BiRefNet and RMBG V1.4 are available.
* invert_mask: Whether to invert the mask.
* mask_grow: Mask expansion. For subject, a larger value brings the obtained color closer to the color at the center of the subject (see the mask-expansion sketch after the output list below).

Output:
* image: Solid color image output, the same size as the input image.
* mask: Mask output.
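
As a rough illustration of mask_grow (a hypothetical sketch built on Pillow filters, not this repository's own expand_mask helper), a positive value dilates the white area and a negative value erodes it, with a slight blur on the result:

```python
# Hypothetical mask_grow sketch; the repository's expand_mask helper may differ.
from PIL import Image, ImageFilter

def grow_mask(mask: Image.Image, grow: int) -> Image.Image:
    mask = mask.convert('L')
    if grow > 0:
        mask = mask.filter(ImageFilter.MaxFilter(grow * 2 + 1))   # dilate the white area
    elif grow < 0:
        mask = mask.filter(ImageFilter.MinFilter(-grow * 2 + 1))  # erode the white area
    # A slight blur softens the expanded edge (assumed behavior).
    return mask.filter(ImageFilter.GaussianBlur(abs(grow) / 2)) if grow else mask
```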

### <a id="table1">ExtendCanvas</a>
Extend the canvas.
Binary file added image/get_color_tone_v2_example2.jpg
Binary file modified image/get_color_tone_v2_node.jpg
40 changes: 25 additions & 15 deletions py/get_color_tone_v2.py
@@ -9,8 +9,8 @@ def __init__(self):

    @classmethod
    def INPUT_TYPES(self):
        remove_background_list = ['BiRefNet', 'RMBG 1.4',]
        subject_list = ['entire', 'background', 'subject']
        remove_background_list = ['none','BiRefNet', 'RMBG 1.4',]
        subject_list = ['mask','entire', 'background', 'subject']
        mode_list = ['main_color', 'average']
        return {
            "required": {
@@ -19,15 +19,15 @@ def INPUT_TYPES(self):
"color_of": (subject_list,),
"remove_bkgd_method": (remove_background_list,),
"invert_mask": ("BOOLEAN", {"default": False}), # 反转mask#
"mask_grow": ("INT", {"default": 16, "min": -999, "max": 999, "step": 1}),
"mask_grow": ("INT", {"default": 0, "min": -999, "max": 999, "step": 1}),
},
"optional": {
"mask": ("MASK",), #
}
}

RETURN_TYPES = ("IMAGE", "STRING", "LIST",)
RETURN_NAMES = ("image", "color_in_hex", "HSV color in list",)
RETURN_TYPES = ("IMAGE", "STRING", "LIST", "MASK")
RETURN_NAMES = ("image", "color_in_hex", "HSV color in list", "mask",)
FUNCTION = 'get_color_tone_v2'
CATEGORY = '😺dzNodes/LayerUtility'

@@ -38,21 +38,23 @@ def get_color_tone_v2(self, image, mode, remove_bkgd_method, color_of, invert_ma
        _images = []
        _masks = []
        ret_images = []
        ret_masks = []
        need_rmbg = False
        for i in image:
            _images.append(torch.unsqueeze(i, 0))
            m = tensor2pil(i)
            if m.mode == 'RGBA':
                _masks.append(1 - image2mask(m.split()[-1]))
            else:
                need_rmbg = True
                _masks.append(pil2tensor(Image.new("L", (m.width, m.height), color="white")))
        if remove_bkgd_method != 'none':
            need_rmbg = True

        if mask is not None:
            if mask.dim() == 2:
                mask = torch.unsqueeze(mask, 0)
            _masks = []
            for m in mask:
                if not invert_mask:
                    m = 1 - m
                _masks.append(torch.unsqueeze(m, 0))
            need_rmbg = False

@@ -71,32 +73,40 @@ def get_color_tone_v2(self, image, mode, remove_bkgd_method, color_of, invert_ma
                else:
                    _mask = RMBG(_image)
                    _mask = image2mask(_mask)
                _mask = 1 - _mask
            else:
                _mask = _masks[i] if i < len(_masks) else _masks[-1]

            if invert_mask:
                _mask = 1 - _mask

            if mask_grow != 0:
                _mask = expand_mask(_mask, mask_grow, 0)  # expand and blur the mask

            if color_of == 'subject':
                _mask = 1 - _mask

            if color_of == 'entire':
                blured_image = gaussian_blur(_image, int((_image.width + _image.height) / 400))
            else:
                if color_of == 'background':
                    _mask = 1 - _mask
                _mask = tensor2pil(_mask)
                pixel_spread_image = pixel_spread(_image, _mask.convert('RGB'))
                blured_image = gaussian_blur(pixel_spread_image, int((_image.width + _image.height) / 400))
            if mode == 'main_color':

            ret_color = '#000000'
            if mode == 'main_color' and color_of != 'mask':
                ret_color = get_image_color_tone(blured_image)
            else:
            elif mode == 'average' and color_of != 'mask':
                ret_color = get_image_color_average(blured_image)
            elif mode == 'main_color' and color_of == 'mask':
                ret_color = get_image_color_tone(blured_image, mask=_mask)
            elif mode == 'average' and color_of == 'mask':
                ret_color = get_image_color_average(blured_image, mask=_mask)

            ret_image = Image.new('RGB', size=_image.size, color=ret_color)
            ret_images.append(pil2tensor(ret_image))
            ret_masks.append(pil2tensor(_mask))
        log(f"{NODE_NAME} Processed {len(ret_images)} image(s).", message_type='finish')
        hsv_color = RGB_to_HSV(Hex_to_RGB(ret_color))
        return (torch.cat(ret_images, dim=0), ret_color, hsv_color,)
        return (torch.cat(ret_images, dim=0), ret_color, hsv_color, torch.cat(ret_masks, dim=0),)

NODE_CLASS_MAPPINGS = {
"LayerUtility: GetColorToneV2": GetColorToneV2
31 changes: 24 additions & 7 deletions py/imagefunc.py
@@ -893,12 +893,22 @@ def adjust_levels(image:Image, input_black:int=0, input_white:int=255, midtones:
    img = img.astype(np.uint8)
    return cv22pil(img)


def get_image_color_tone(image:Image) -> str:
def get_image_color_tone(image:Image, mask:Image=None) -> str:
    image = image.convert('RGB')
    max_score = 0.0001
    dominant_color = (255, 255, 255)
    for count, (r, g, b) in image.getcolors(image.width * image.height):
    if mask is not None:
        if mask.mode != 'L':
            mask = mask.convert('L')
        canvas = Image.new('RGB', size=image.size, color='black')
        canvas.paste(image, mask=mask)
        image = canvas

    all_colors = image.getcolors(image.width * image.height)
    for count, (r, g, b) in all_colors:
        if mask is not None:
            if r + g + b < 2:  # ignore black pixels (masked-out area)
                continue
        saturation = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[1]
        y = min(abs(r * 2104 + g * 4130 + b * 802 + 4096 + 131072) >> 13, 235)
        y = (y - 16.0) / (235 - 16)
@@ -909,22 +919,29 @@ def get_image_color_tone(image:Image) -> str:
    ret_color = RGB_to_Hex(dominant_color)
    return ret_color

def get_image_color_average(image:Image) -> str:
def get_image_color_average(image:Image, mask:Image=None) -> str:
    image = image.convert('RGB')
    width, height = image.size
    total_red = 0
    total_green = 0
    total_blue = 0
    total_pixel = 0
    for y in range(height):
        for x in range(width):
            if mask is not None:
                if mask.mode != 'L':
                    mask = mask.convert('L')
                if mask.getpixel((x, y)) <= 127:
                    continue
            rgb = image.getpixel((x, y))
            total_red += rgb[0]
            total_green += rgb[1]
            total_blue += rgb[2]
            total_pixel += 1

    average_red = total_red // (width * height)
    average_green = total_green // (width * height)
    average_blue = total_blue // (width * height)
    average_red = total_red // total_pixel
    average_green = total_green // total_pixel
    average_blue = total_blue // total_pixel
    color = (average_red, average_green, average_blue)
    ret_color = RGB_to_Hex(color)
    return ret_color
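
A possible usage sketch of the two helpers above, assuming a Pillow image and a same-size L-mode mask (file names and the import path are placeholders):

```python
# Hypothetical usage of get_image_color_tone / get_image_color_average with the new mask argument.
from PIL import Image
from imagefunc import get_image_color_tone, get_image_color_average  # import path is an assumption

img = Image.open('photo.jpg').convert('RGB')   # placeholder path
msk = Image.open('mask.png').convert('L')      # placeholder path, same size as img

print(get_image_color_tone(img, mask=msk))     # dominant color inside the mask, as a '#rrggbb' string
print(get_image_color_average(img, mask=msk))  # average color inside the mask
```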
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,7 +1,7 @@
[project]
name = "comfyui_layerstyle"
description = "A set of nodes for ComfyUI that generate images like Adobe Photoshop's Layer Style. The Drop Shadow is the first completed node, and follow-up work is in progress."
version = "1.0.12"
version = "1.0.13"
license = "MIT"
dependencies = ["numpy", "pillow", "torch", "matplotlib", "Scipy", "scikit_image", "opencv-contrib-python", "pymatting", "segment_anything", "timm", "addict", "yapf", "colour-science", "wget", "mediapipe", "loguru", "typer_config", "fastapi", "rich", "google-generativeai", "diffusers", "omegaconf", "tqdm", "transformers", "kornia", "image-reward", "ultralytics", "blend_modes", "blind-watermark", "qrcode", "pyzbar", "psd-tools"]

