commit GetMainColors and ColorName node, Duplicate Brightness & Contrast node as BrightnessContrastV2, and Color of Shadow & Highlight node as ColorofShadowHighlightV2
chflame163 committed Sep 14, 2024
1 parent 1240cc6 commit 15d3bb1
Showing 17 changed files with 2,471 additions and 16 deletions.
34 changes: 33 additions & 1 deletion README.MD
@@ -116,7 +116,8 @@ When this error has occurred, please check the network environment.
## Update
<font size="4">**If a dependency package error occurs after updating, please double-click ```repair_dependency.bat``` (for official ComfyUI Portable) or ```repair_dependency_aki.bat``` (for ComfyUI-aki-v1.x) in the plugin folder to reinstall the dependency packages.** </font><br />


* Add the [GetMainColors](#GetMainColors) node, which obtains the 5 main colors of an image, and the [ColorName](#ColorName) node, which obtains the color name of an input color value.
* Duplicate the [Brightness & Contrast](#Brightness) node as [BrightnessContrastV2](#BrightnessContrastV2), and the [Color of Shadow & Highlight](#Highlight) node as [ColorofShadowHighlightV2](#HighlightV2), to avoid errors in ComfyUI workflow parsing caused by the "&" character in the node name.
* Add the [VQAPrompt](#VQAPrompt) and [LoadVQAModel](#LoadVQAModel) nodes.
Download the models from [BaiduNetdisk](https://pan.baidu.com/s/1ILREVgM0eFJlkWaYlKsR0g?pwd=yw75) or from [huggingface.co/Salesforce/blip-vqa-capfilt-large](https://huggingface.co/Salesforce/blip-vqa-capfilt-large/tree/main) and [huggingface.co/Salesforce/blip-vqa-base](https://huggingface.co/Salesforce/blip-vqa-base/tree/main), and copy them to the ```ComfyUI\models\VQA``` folder.
* The [Florence2Ultra](#Florence2Ultra), [Florence2Image2Prompt](#Florence2Image2Prompt) and [LoadFlorence2Model](#LoadFlorence2Model) nodes support the MiaoshouAI/Florence-2-large-PromptGen-v1.5 and MiaoshouAI/Florence-2-base-PromptGen-v1.5 models.
@@ -503,6 +504,8 @@ Node options:
Node option:
* exposure: Exposure value. Higher values produce a brighter image.

### <a id="table1">ColorofShadowHighlightV2</a>
A replica of the ```Color of Shadow & Highlight``` node, with the "&" character removed from the node name to avoid ComfyUI workflow parsing errors.

### <a id="table1">ColorTemperature</a>
![image](image/color_temperature_example.jpg)
@@ -554,6 +557,9 @@ Node options:
* contrast: Value of contrast.
* saturation: Value of saturation.

### <a id="table1">BrightnessContrastV2</a>
A replica of the ```Brightness & Contrast``` node, with the "&" character removed from the node name to avoid ComfyUI workflow parsing errors.
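
The enhance semantics used throughout these nodes (factor 1.0 = unchanged, higher = stronger) can be illustrated with a small pure-Python sketch of the brightness/contrast math on one RGB pixel. This is an illustration only, not the node's implementation: the fixed 128 pivot is an assumption, whereas PIL's ```ImageEnhance.Contrast``` (which the node family uses) pivots around the image's mean gray.

```python
def adjust_pixel(rgb, brightness=1.0, contrast=1.0):
    """Apply brightness then contrast to one RGB pixel (0-255 channels).

    brightness scales each channel; contrast scales the distance from a
    mid-gray pivot (128 here -- an assumption; PIL's ImageEnhance.Contrast
    pivots around the image's mean luminance instead).
    """
    out = []
    for c in rgb:
        c = c * brightness                 # brightness: plain channel scaling
        c = 128 + (c - 128) * contrast     # contrast: push away from the pivot
        out.append(max(0, min(255, round(c))))  # clamp back to 0-255
    return tuple(out)

print(adjust_pixel((100, 150, 200), brightness=1.2, contrast=1.1))  # → (119, 185, 251)
```

A factor of 1.0 for every parameter leaves the pixel untouched, which is why the node skips each enhancement step when its factor equals 1.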

### <a id="table1">RGB</a>
Adjust the RGB channels of the image.

@@ -971,6 +977,32 @@ Output:
* image: Solid color image output; the size is the same as the input image.
* mask: Mask output.

### <a id="table1">GetMainColors</a>
Obtain the main colors of the image. It outputs the 5 most dominant colors.
![image](image/get_main_color_and_color_name_example.jpg)
![image](image/get_main_colors_example.jpg)

Node Options:
![image](image/get_main_color_node.jpg)
* image: The image input.
* k_means_algorithm: K-Means algorithm options. "lloyd" is the standard K-Means algorithm; "elkan" uses the triangle inequality and is suited to larger images.

Outputs:
* preview_image: Preview image of the 5 main colors.
* color_1~color_5: Color value outputs, each an RGB string in HEX format.
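
The "lloyd" option above is the classic assign-then-recenter loop. As a rough, self-contained sketch (not the node's code — the node presumably relies on an optimized K-Means implementation), the idea looks like this, with a deterministic initialization chosen here for reproducibility:

```python
def main_colors(pixels, k=5, iters=10):
    """Cluster RGB pixels with standard (Lloyd) K-Means and return the
    cluster centers as HEX strings, largest cluster first."""
    # deterministic init (an assumption): the first k distinct pixel values
    centers = []
    for p in pixels:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    k = len(centers)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest center (squared RGB distance)
            j = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # recenter on the cluster mean; empty clusters stay put
                centers[j] = tuple(sum(ch) / len(members) for ch in zip(*members))
    order = sorted(range(k), key=lambda j: -len(clusters[j]))
    return ['#%02X%02X%02X' % tuple(round(c) for c in centers[j]) for j in order]

# a toy "image" with three obvious color groups
pixels = [(250, 0, 0)] * 6 + [(0, 250, 0)] * 3 + [(0, 0, 250)]
print(main_colors(pixels, k=3))  # → ['#FA0000', '#00FA00', '#0000FA']
```

With a real image you would feed in the flattened pixel list. The "elkan" variant produces the same clusters but skips many distance computations via the triangle inequality, which is why it pays off on larger images.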

### <a id="table1">ColorName</a>
Output the name of the palette color most similar to the input color value.
![image](image/color_name_example.jpg)

Node Options:
![image](image/color_name_node.jpg)
* color: Color value input, as an RGB string in HEX format.
* palette: Color palette. ```xkcd``` includes 949 colors, ```css3``` includes 147 colors, and ```html4``` includes 16 colors.

Output:
* color_name: Color name, as a string.
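
Nearest-name lookup of this kind is typically a minimum-distance search over the palette. A minimal sketch with a hypothetical four-entry palette (the node's real xkcd/css3/html4 palettes are far larger):

```python
def color_name(hex_color, palette):
    """Return the palette name whose RGB value is nearest to hex_color
    (minimum squared distance in RGB space)."""
    r, g, b = (int(hex_color.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4))
    return min(palette, key=lambda name: sum((c - p) ** 2 for c, p in zip((r, g, b), palette[name])))

# a hypothetical four-entry palette using well-known CSS3 values
CSS3_SAMPLE = {
    'red': (255, 0, 0),
    'navy': (0, 0, 128),
    'gold': (255, 215, 0),
    'teal': (0, 128, 128),
}

print(color_name('#FF4500', CSS3_SAMPLE))  # → red
```

Squared RGB distance is the simplest metric; a perceptual space such as CIELAB would rank some near-ties differently, and the source does not say which metric the node uses.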

### <a id="table1">ExtendCanvas</a>
Extend the canvas.
![image](image/extend_canvas_example.jpg)
33 changes: 33 additions & 0 deletions README_CN.MD
@@ -116,6 +116,8 @@ os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
## Update Notes
<font size="4">**If a dependency package error occurs after updating this plugin, please double-click ```install_requirements.bat``` (for the official portable package) or ```install_requirements_aki.bat``` (for the ComfyUI-aki-v1.x package) in the plugin directory to reinstall the dependency packages.

* Add the [GetMainColors](#GetMainColors) node, which obtains the 5 main colors of an image, and the [ColorName](#ColorName) node, which obtains the name of a color.
* Duplicate the [Brightness & Contrast](#Brightness) node as [BrightnessContrastV2](#BrightnessContrastV2) and the [Color of Shadow & Highlight](#Highlight) node as [ColorofShadowHighlightV2](#HighlightV2), to avoid ComfyUI workflow parsing errors caused by the "&" character in the node name.
* Add the [VQAPrompt](#VQAPrompt) and [LoadVQAModel](#LoadVQAModel) nodes.
Download all model files from [BaiduNetdisk](https://pan.baidu.com/s/1ILREVgM0eFJlkWaYlKsR0g?pwd=yw75), or from [huggingface.co/Salesforce/blip-vqa-capfilt-large](https://huggingface.co/Salesforce/blip-vqa-capfilt-large/tree/main) and [huggingface.co/Salesforce/blip-vqa-base](https://huggingface.co/Salesforce/blip-vqa-base/tree/main), and place them in the ```ComfyUI\models\VQA``` folder.
* The [Florence2Ultra](#Florence2Ultra), [Florence2Image2Prompt](#Florence2Image2Prompt) and [LoadFlorence2Model](#LoadFlorence2Model) nodes support the MiaoshouAI/Florence-2-large-PromptGen-v1.5 and MiaoshouAI/Florence-2-base-PromptGen-v1.5 models.
@@ -497,6 +499,9 @@ os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
* highlight_level_offset: Offset of the highlight value; smaller values bring more areas close to the shadows into the highlights.
* highlight_range: Transition range of the highlights.

### <a id="table1">ColorofShadowHighlightV2</a>
A replica of the ```Color of Shadow & Highlight``` node, with the "&" character removed from the node name to avoid ComfyUI workflow parsing errors.

### <a id="table1">ColorTemperature</a>
![image](image/color_temperature_example.jpg)
Change the color temperature of the image.
@@ -547,6 +552,8 @@ os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
* contrast: Contrast of the image.
* saturation: Color saturation of the image.

### <a id="table1">BrightnessContrastV2</a>
A replica of the ```Brightness & Contrast``` node, with the "&" character removed from the node name to avoid ComfyUI workflow parsing errors.

### <a id="table1">RGB</a>
Adjust the individual RGB channels of the image.
@@ -956,6 +963,32 @@ The V2 upgrade of GetColorTone. It can obtain the main color or average color of the subject or background.
* image: Solid color image output; the size is the same as the input image.
* mask: Mask output.

### <a id="table1">GetMainColors</a>
Obtain the main colors of the image. It outputs the 5 most dominant colors.
![image](image/get_main_color_and_color_name_example.jpg)
![image](image/get_main_colors_example.jpg)

Node options:
![image](image/get_main_color_node.jpg)
* image: The image input.
* k_means_algorithm: K-Means algorithm options. "lloyd" is the standard K-Means algorithm; "elkan" uses the triangle inequality and is suited to larger images.

Outputs:
* preview_image: Preview image of the 5 main colors.
* color_1~color_5: Color value outputs, each an RGB string in HEX format.

### <a id="table1">ColorName</a>
Output the name of the palette color closest to the input color value.
![image](image/color_name_example.jpg)

Node options:
![image](image/color_name_node.jpg)
* color: Color value input, as an RGB string in HEX format.
* palette: Color palette. ```xkcd``` includes 949 colors, ```css3``` includes 147 colors, and ```html4``` includes 16 colors.

Outputs:
* color_name: Color name, as a string.

### <a id="table1">ExtendCanvas</a>
Extend the canvas.
![image](image/extend_canvas_example.jpg)
Binary file added image/color_name_example.jpg
Binary file added image/color_name_node.jpg
Binary file added image/get_main_color_and_color_name_example.jpg
Binary file added image/get_main_color_node.jpg
Binary file added image/get_main_colors_example.jpg
61 changes: 56 additions & 5 deletions py/color_correct_brightness_and_contrast.py
@@ -1,11 +1,11 @@
from .imagefunc import *


class ColorCorrectBrightnessAndContrast:

    def __init__(self):
        self.NODE_NAME = 'Brightness & Contrast'

    @classmethod
    def INPUT_TYPES(self):
@@ -48,13 +48,64 @@ def color_correct_brightness_and_contrast(self, image, brightness, contrast, sat
                ret_image = RGB2RGBA(ret_image, __image.split()[-1])
            ret_images.append(pil2tensor(ret_image))

        log(f"{self.NODE_NAME} Processed {len(ret_images)} image(s).", message_type='finish')
        return (torch.cat(ret_images, dim=0),)


# Node name without the "&" character
class LS_ColorCorrect_Brightness_And_Contrast_V2:

    def __init__(self):
        self.NODE_NAME = 'Brightness Contrast V2'

    @classmethod
    def INPUT_TYPES(self):

        return {
            "required": {
                "image": ("IMAGE", ),
                "brightness": ("FLOAT", {"default": 1, "min": 0.0, "max": 3, "step": 0.01}),
                "contrast": ("FLOAT", {"default": 1, "min": 0.0, "max": 3, "step": 0.01}),
                "saturation": ("FLOAT", {"default": 1, "min": 0.0, "max": 3, "step": 0.01}),
            },
            "optional": {
            }
        }

    RETURN_TYPES = ("IMAGE",)
    RETURN_NAMES = ("image",)
    FUNCTION = 'color_correct_brightness_contrast_v2'
    CATEGORY = '😺dzNodes/LayerColor'

    def color_correct_brightness_contrast_v2(self, image, brightness, contrast, saturation):

        ret_images = []

        for i in image:
            i = torch.unsqueeze(i, 0)
            __image = tensor2pil(i)
            ret_image = __image.convert('RGB')
            if brightness != 1:
                brightness_image = ImageEnhance.Brightness(ret_image)
                ret_image = brightness_image.enhance(factor=brightness)
            if contrast != 1:
                contrast_image = ImageEnhance.Contrast(ret_image)
                ret_image = contrast_image.enhance(factor=contrast)
            if saturation != 1:
                color_image = ImageEnhance.Color(ret_image)
                ret_image = color_image.enhance(factor=saturation)

            if __image.mode == 'RGBA':
                ret_image = RGB2RGBA(ret_image, __image.split()[-1])
            ret_images.append(pil2tensor(ret_image))

        log(f"{self.NODE_NAME} Processed {len(ret_images)} image(s).", message_type='finish')
        return (torch.cat(ret_images, dim=0),)

NODE_CLASS_MAPPINGS = {
    "LayerColor: Brightness & Contrast": ColorCorrectBrightnessAndContrast,
    "LayerColor: BrightnessContrastV2": LS_ColorCorrect_Brightness_And_Contrast_V2
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "LayerColor: Brightness & Contrast": "LayerColor: Brightness & Contrast",
    "LayerColor: BrightnessContrastV2": "LayerColor: Brightness Contrast V2"
}
124 changes: 120 additions & 4 deletions py/color_correct_shadow_and_highlight.py
@@ -1,17 +1,18 @@
from .imagefunc import *


def norm_value(value):
    if value < 0.01:
        value = 0.01
    if value > 0.99:
        value = 0.99
    return value


class ColorCorrectShadowAndHighlight:

    def __init__(self):
        self.NODE_NAME = 'Color of Shadow & Highlight'

    @classmethod
    def INPUT_TYPES(self):
@@ -118,10 +119,125 @@ def color_shadow_and_highlight(self, image,
        log(f"{self.NODE_NAME} Processed {len(ret_images)} image(s).", message_type='finish')
        return (torch.cat(ret_images, dim=0),)


# Node name without the "&" character
class LS_ColorCorrectShadow_And_Highlight_V2:

    def __init__(self):
        self.NODE_NAME = 'Color of Shadow & Highlight V2'

    @classmethod
    def INPUT_TYPES(self):

        return {
            "required": {
                "image": ("IMAGE", ),
                "shadow_brightness": ("FLOAT", {"default": 1, "min": 0.0, "max": 3, "step": 0.01}),
                "shadow_saturation": ("FLOAT", {"default": 1, "min": 0.0, "max": 3, "step": 0.01}),
                "shadow_hue": ("INT", {"default": 0, "min": -255, "max": 255, "step": 1}),
                "shadow_level_offset": ("INT", {"default": 0, "min": -99, "max": 99, "step": 1}),
                "shadow_range": ("FLOAT", {"default": 0.25, "min": 0.01, "max": 0.99, "step": 0.01}),
                "highlight_brightness": ("FLOAT", {"default": 1, "min": 0.0, "max": 3, "step": 0.01}),
                "highlight_saturation": ("FLOAT", {"default": 1, "min": 0.0, "max": 3, "step": 0.01}),
                "highlight_hue": ("INT", {"default": 0, "min": -255, "max": 255, "step": 1}),
                "highlight_level_offset": ("INT", {"default": 0, "min": -99, "max": 99, "step": 1}),
                "highlight_range": ("FLOAT", {"default": 0.25, "min": 0.01, "max": 0.99, "step": 0.01}),
            },
            "optional": {
                "mask": ("MASK",),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    RETURN_NAMES = ("image",)
    FUNCTION = 'color_shadow_and_highlight_v2'
    CATEGORY = '😺dzNodes/LayerColor'

    def color_shadow_and_highlight_v2(self, image,
                                      shadow_brightness, shadow_saturation,
                                      shadow_level_offset, shadow_range, shadow_hue,
                                      highlight_brightness, highlight_saturation, highlight_hue,
                                      highlight_level_offset, highlight_range,
                                      mask=None
                                      ):

        ret_images = []
        input_images = []
        input_masks = []

        for i in image:
            input_images.append(torch.unsqueeze(i, 0))
            m = tensor2pil(i)
            if m.mode == 'RGBA':
                input_masks.append(m.split()[-1])
            else:
                input_masks.append(Image.new('L', size=m.size, color='white'))
        if mask is not None:
            if mask.dim() == 2:
                mask = torch.unsqueeze(mask, 0)
            input_masks = []
            for m in mask:
                input_masks.append(tensor2pil(torch.unsqueeze(m, 0)).convert('L'))
        max_batch = max(len(input_images), len(input_masks))

        for i in range(max_batch):
            _image = input_images[i] if i < len(input_images) else input_images[-1]
            _image = tensor2pil(_image).convert('RGB')
            _mask = input_masks[i] if i < len(input_masks) else input_masks[-1]

            avg_gray = get_gray_average(_image, _mask)
            shadow_level, highlight_level = calculate_shadow_highlight_level(avg_gray)
            _canvas = _image.copy()
            if shadow_saturation != 1 or shadow_brightness != 1 or shadow_hue:
                shadow_low_threshold = (shadow_level + shadow_level_offset) / 100 + shadow_range / 2
                shadow_low_threshold = norm_value(shadow_low_threshold)
                shadow_high_threshold = (shadow_level + shadow_level_offset) / 100 - shadow_range / 2
                shadow_high_threshold = norm_value(shadow_high_threshold)
                _shadow_mask = luminance_keyer(_image, shadow_low_threshold, shadow_high_threshold)
                _shadow = _image.copy()
                if shadow_brightness != 1:
                    brightness_image = ImageEnhance.Brightness(_shadow)
                    _shadow = brightness_image.enhance(factor=shadow_brightness)
                if shadow_saturation != 1:
                    color_image = ImageEnhance.Color(_shadow)
                    _shadow = color_image.enhance(factor=shadow_saturation)
                if shadow_hue:
                    _h, _s, _v = _shadow.convert('HSV').split()
                    _h = image_hue_offset(_h, shadow_hue)
                    _shadow = image_channel_merge((_h, _s, _v), 'HSV')
                _canvas.paste(_shadow, mask=gaussian_blur(_shadow_mask, (_shadow_mask.width + _shadow_mask.height) // 800))
                _canvas.paste(_image, mask=ImageChops.invert(_mask))
            if highlight_saturation != 1 or highlight_brightness != 1 or highlight_hue:
                highlight_low_threshold = (highlight_level + highlight_level_offset) / 100 - highlight_range / 2
                highlight_low_threshold = norm_value(highlight_low_threshold)
                highlight_high_threshold = (highlight_level + highlight_level_offset) / 100 + highlight_range / 2
                highlight_high_threshold = norm_value(highlight_high_threshold)
                _highlight_mask = luminance_keyer(_image, highlight_low_threshold, highlight_high_threshold)
                _highlight = _image.copy()
                if highlight_brightness != 1:
                    brightness_image = ImageEnhance.Brightness(_highlight)
                    _highlight = brightness_image.enhance(factor=highlight_brightness)
                if highlight_saturation != 1:
                    color_image = ImageEnhance.Color(_highlight)
                    _highlight = color_image.enhance(factor=highlight_saturation)
                if highlight_hue:
                    _h, _s, _v = _highlight.convert('HSV').split()
                    _h = image_hue_offset(_h, highlight_hue)
                    _highlight = image_channel_merge((_h, _s, _v), 'HSV')
                _canvas.paste(_highlight, mask=gaussian_blur(_highlight_mask, (_highlight_mask.width + _highlight_mask.height) // 800))
                _canvas.paste(_image, mask=ImageChops.invert(_mask))
            ret_images.append(pil2tensor(_canvas))

        log(f"{self.NODE_NAME} Processed {len(ret_images)} image(s).", message_type='finish')
        return (torch.cat(ret_images, dim=0),)


NODE_CLASS_MAPPINGS = {
    "LayerColor: Color of Shadow & Highlight": ColorCorrectShadowAndHighlight,
    "LayerColor: ColorofShadowHighlightV2": LS_ColorCorrectShadow_And_Highlight_V2
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "LayerColor: Color of Shadow & Highlight": "LayerColor: Color of Shadow & Highlight",
    "LayerColor: ColorofShadowHighlightV2": "LayerColor: Color of Shadow Highlight V2"
}