
Dev quant tools and fix graph file management #495

Merged: 29 commits from dev_quant_tools into main on Jan 16, 2024

Conversation

Resolved review threads: onediff_comfy_nodes/__init__.py, onediff_comfy_nodes/_quant_tools.py, onediff_comfy_nodes/_nodes.py
Comment on lines 208 to 210
mse = torch.mean((org_latent_sample - cur_latent_sample) ** 2)

ssim = 1 - mse
@ccssu (Contributor Author) commented on Jan 11, 2024:

MSE: latent samples generated by the original model vs. latent samples generated with one layer of the model quantized.

In the SDXL quantization experiments, 1 - MSE can be used in place of SSIM when comparing the reconstructed images.
[attachment: sdxl_base_1_0_key_ssim_mse.txt]

Model parameter name    SSIM value            1 - MSE value
time_embed.0            0.9756851169369525    0.9671265296638012
time_embed.2            0.9616645683538984    0.8562246561050415
label_emb.0.0           0.97451926291366      0.9558620862662792

The figure below shows all entries with SSIM < 0.97 (SSIM = image generated by the original model vs. image generated with one layer quantized).
The SSIM values and the 1 - MSE values follow the same trend and are correlated.
[figure: ssim_1-mse_0.97]
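
A minimal sketch of the metric being discussed, assuming a hypothetical latent_similarity helper (the real logic lives in onediff_comfy_nodes/_quant_tools.py); it mirrors the snippet quoted above, where 1 - MSE is used as a cheap stand-in for SSIM:

import torch

def latent_similarity(org_latent_sample: torch.Tensor, cur_latent_sample: torch.Tensor) -> float:
    # Mirror of the snippet above: MSE between the two latents, reported as 1 - MSE
    # so that a larger value means the quantized layer changed the output less.
    mse = torch.mean((org_latent_sample - cur_latent_sample) ** 2)
    return float(1.0 - mse)

# Dummy latents just to exercise the helper (real ones come from the UNet forward pass).
org = torch.randn(1, 4, 128, 128)
quantized = org + 0.01 * torch.randn_like(org)
print(latent_similarity(org, quantized))  # close to 1.0 when the quantized layer matches well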

Contributor Author:

The figure below shows all entries with SSIM < 0.98 (SSIM = image generated by the original model vs. image generated with one layer quantized).
[figure: ssim_1-mse_0.98]

Contributor:

It looks like the MSE metric has a wider spread and better discriminative power. We could try replacing SSIM with MSE, quantize SDXL + DeepCache, and see how it performs.
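
A rough sketch of how the spread of the two metrics could be compared, assuming the attached sdxl_base_1_0_key_ssim_mse.txt uses the whitespace-separated layout shown in the table above (the file name and layout are assumptions here):

import statistics

def metric_spread(path: str = "sdxl_base_1_0_key_ssim_mse.txt") -> tuple[float, float]:
    # Assumed layout: "<param_name> <ssim> <1-mse>" per line, as in the table above.
    ssim_vals, one_minus_mse_vals = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue  # skip headers and blank lines
            _, ssim, one_minus_mse = parts
            ssim_vals.append(float(ssim))
            one_minus_mse_vals.append(float(one_minus_mse))
    # A larger standard deviation suggests the metric separates the layers more clearly.
    return statistics.stdev(ssim_vals), statistics.stdev(one_minus_mse_vals)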

Contributor Author:

Left: SDXL + DeepCache. Right: SDXL + quantization + DeepCache.

  • Quantized conv: 3
  • Quantized linear: 582
  • Image: 1024x1024, steps: 20
[images]

    count = len(
        [v for v in args_tree.iter_nodes() if isinstance(v, flow.Tensor)]
    )
    return f"{graph_file}_{count}.graph"
Contributor Author:

Chose to append the number of tensor input arguments after f"{graph_file}". @strint

Adding a parameter to the Load Checkpoint - OneDiff node for specifying the graph file still cannot fix the error in the workflow below.
[image: test_02]
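
A simplified stand-in for the naming logic quoted above, using torch tensors and plain recursion instead of oneflow's ArgsTree (count_tensors and graph_file_name are illustrative names, not the actual onediff API):

import torch

def count_tensors(args) -> int:
    # Recursively count tensors in nested args; a rough equivalent of walking the ArgsTree.
    if isinstance(args, torch.Tensor):
        return 1
    if isinstance(args, (list, tuple)):
        return sum(count_tensors(a) for a in args)
    if isinstance(args, dict):
        return sum(count_tensors(v) for v in args.values())
    return 0

def graph_file_name(graph_file: str, args) -> str:
    # Append the number of tensor inputs so workflows that call the same model with a
    # different number of inputs do not collide on one cached graph file.
    return f"{graph_file}_{count_tensors(args)}.graph"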

@ccssu (Contributor Author) commented on Jan 13, 2024:

The "Check failed: (x == y)" error

⚠️ A caveat when using OneDiff: the best practice is to make sure each loader corresponds to exactly one dedicated KSampler.
[image: user_00]

@ccssu (Contributor Author) commented on Jan 13, 2024:

Caveat 2: In the ControlNet node, the strength parameter does not support switching between 0 and values greater than 0. A strength of 0 effectively disables ControlNet, while a value greater than 0 enables it. Do not switch between 0 and a value greater than 0, or an error will occur.

@ccssu changed the title from "Dev quant tools" to "Dev quant tools and fix graph file management" on Jan 15, 2024
Comment on lines 222 to 223
if index % 10 == 0 or index == length - 1:
    torch.save(calibrate_info, cached_process_output_path)
Contributor:

Should this caching behavior be explicitly opted into by the user? Otherwise it may end up writing too many files and directories.

Contributor Author:

> Should this caching behavior be explicitly opted into by the user? Otherwise it may end up writing too many files and directories.

The cache here corresponds to the quantize_info_and_relevance_metrics.pt file in the onediff-quant quantization script:

https://github.com/siliconflow/onediff-quant/blob/dc8a3ba8e553ba43d269b701f2b16aea288a78ff/tools/quantize-sd-fast-with-threshold.py#L125

Should we add a cached_process_output: ["enable", "disable"] option?
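
A hypothetical sketch of that opt-in switch, assuming a cached_process_output value of "enable" or "disable" is passed down to the calibration loop (names and signature are illustrative, not the actual onediff_comfy_nodes API):

import torch

def maybe_cache_calibrate_info(calibrate_info: dict, index: int, length: int,
                               cached_process_output: str = "disable",
                               path: str = "quantize_info_and_relevance_metrics.pt") -> None:
    # Nothing is written unless the user explicitly enables caching.
    if cached_process_output != "enable":
        return
    # Same periodic-save condition as the snippet from lines 222 to 223 above.
    if index % 10 == 0 or index == length - 1:
        torch.save(calibrate_info, path)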

Contributor:

I don't think the Comfy tools necessarily need to support a cache. If the goal is just to let users tune those few quantization parameters, we could simply keep a copy of calibrate_info in memory: each time the parameters change, generate a new calibrate_info and write it to a user-specified directory, leaving the original calibrate_info unmodified (see the sketch after this list).

If we really do need a cache, I have a few questions:

  • Could switching models lead to the cache being misused? How do we determine whether a cache entry should be hit?
  • When does the cache directory get cleared? Does it require manual action from the user, and how does the user know which directory to clear?
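
A sketch of that cache-free approach, under the assumption that calibrate_info is a plain dict that can be deep-copied and saved with torch.save (export_calibrate_info is an illustrative name):

import copy
import os
import torch

def export_calibrate_info(calibrate_info: dict, output_dir: str,
                          filename: str = "calibrate_info.pt") -> str:
    derived = copy.deepcopy(calibrate_info)  # the original stays untouched in memory
    # ... apply the user's current quantization parameters to `derived` here ...
    os.makedirs(output_dir, exist_ok=True)
    out_path = os.path.join(output_dir, filename)
    torch.save(derived, out_path)
    return out_path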

Contributor Author:

[image]

)

calibrate_info[sub_name] = {
    "ssim": similarity_mse,
Contributor:

Calling this "mse" rather than "ssim" would be more accurate.

Contributor:

One issue: with MSE, a larger value means the two images are less similar, whereas similarity_mse here (1 - mse), like real SSIM, is larger when the two images are more similar.

Contributor:

Then how about calling it max_mse_loss? "loss" makes it easier to understand that smaller is better.

Contributor Author:

Would it work to rename these two parameters to conv_mse_threshold and linear_mse_threshold, and drop the "ssim" naming entirely?
[image]

@hjchen2 (Contributor) commented on Jan 15, 2024:

I think that works. With MSE, smaller means more similar; change the default value to 0.1.
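
An illustrative sketch, not the actual node code, of how conv_mse_threshold and linear_mse_threshold with a default of 0.1 could select layers to quantize; the "kind" and "mse" keys of calibrate_info are assumptions made for this example:

def select_layers(calibrate_info: dict,
                  conv_mse_threshold: float = 0.1,
                  linear_mse_threshold: float = 0.1) -> dict:
    # MSE is a loss-style value: smaller means the quantized layer stays closer to the
    # original, so only layers at or below the threshold are kept for quantization.
    selected = {}
    for name, info in calibrate_info.items():
        threshold = conv_mse_threshold if info.get("kind") == "conv" else linear_mse_threshold
        if info["mse"] <= threshold:
            selected[name] = info
    return selected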

@hjchen2 merged commit d472b13 into main on Jan 16, 2024 (1 of 4 checks passed)
@hjchen2 deleted the dev_quant_tools branch on January 16, 2024 at 06:42