Add a new sampler named Kohaku_LoNyu_Yog. Recommended number of steps: 10. Since it is a second-order method, it is slower than other methods.
Principle: Please refer to the following two images. Since three-dimensional space is a subspace of high-dimensional space, any operation that works in three-dimensional space must also be feasible in high-dimensional space. I therefore use some geometric tricks (as shown in Figure 1), assuming that the tensor and the target image can be simplified to a moving point. All statements in this section refer to this simplified three-dimensional form of the tensor.
First, we find -x and compute the gradients d and d2. From Figure 2, it is easy to deduce geometrically that (d+d2)/2 must be a vector pointing downward, toward the target. Therefore, x+(d+d2)/2 must be a point closer to the target region A. Denoising at this point yields the velocity vector d3, and as the figure shows, (d+d3)/2 points closer to the true target region.
In the last few steps of sampling, you will find that this method actually deviates from the true region; you can verify this by plotting the images. We therefore only apply this method on half of the steps.
You may object that if the trajectory of x is a concave function, this theory does not hold at all. That is completely correct. However, this sampling method is consistently effective, which strongly suggests that the projection of x onto two or three dimensions is a convex function.
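The update described above can be sketched as a single second-order step. This is an illustrative reconstruction under my own assumptions, not the repository's actual code: denoise(x, sigma) stands in for the model's denoised prediction, and the step is written for a scalar toy problem.

```python
def second_order_step(x, sigma, sigma_next, denoise):
    """One step in the spirit of the geometric argument above (a sketch).

    denoise(x, sigma) is a placeholder for the model's denoised estimate;
    all names here are illustrative. Assumes sigma_next > 0.
    """
    dt = sigma_next - sigma

    # Gradient d at the current point (the ordinary Euler direction).
    d = (x - denoise(x, sigma)) / sigma

    # Take a trial Euler step and evaluate the gradient d2 there.
    x_trial = x + d * dt
    d2 = (x_trial - denoise(x_trial, sigma_next)) / sigma_next

    # x + (d + d2)/2 * dt should lie closer to the target region A;
    # denoising there gives the third direction d3.
    x_mid = x + (d + d2) / 2 * dt
    d3 = (x_mid - denoise(x_mid, sigma_next)) / sigma_next

    # The final update averages d and d3, matching (d + d3)/2 in the text.
    return x + (d + d3) / 2 * dt
```

Each step calls the model three times, which is why a second-order sampler is slower than first-order methods; per the note above, it would only be applied on part of the schedule.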
This sampler does not bring a significant improvement in quality or speed (quality may improve slightly), but I believe it demonstrates several things, such as the possibility of using geometric methods in three-dimensional space for analysis.
I am currently goofing off at the company, so test cases and plugins will be submitted later.
Two new samplers have been added, Euler Negative and Euler dy Negative. I won't claim they perform better than others because there's no theoretical basis for it. However, in practice, I quite like them.
They perform slightly better in SDXL, but their performance in SD1.5 is also decent.
I need to brush up on my AI knowledge; currently, relying purely on intuition and practice is too freewheeling an approach.
Below are the test results for them:
832x1216, model: kohaku-xl-epsilon
Please Note
The current plugin has a few minor bugs that cause the character to occupy a smaller portion of the frame, like this:
So you can consider alternative approaches, such as modifying the source code to add these two samplers; please refer to: How to use
A Brief Analysis of How Dy Step Works
Recently, I tested Euler Dy on https://civitai.com/models/399873/kohaku-xl-epsilon, and the results were unsatisfactory (though it performs well on ang3 and the Pony series). I therefore asked the model's author and was told that "the model did not use any low-resolution images for training." I believe this is the reason. Euler Dy places the image at a small scale, bringing the denoising work into the model's comfort zone and providing a reference. In SD1.5 especially, Euler Dy keeps the image within the model's comfort zone throughout.
This SDXL model, however, has almost forgotten how to generate images at small scales. The direction for improving Dy Step has therefore become clear: find the SDXL model's comfort zone and let the sampler work within it.
I've also written a few other samplers, but their performance is mediocre and doesn't match up to Dy Step's effectiveness. If anyone wants to try them out, please leave a comment in the discussions.
Interim Technical Report
Over the past few days, I have tried more than twenty strategies, but the resulting sampler quality is always better than Euler a yet worse than Euler dy, so I cannot release Euler dy a for now. I must acknowledge that this is a different thing from NAI3's dyn. I will keep maintaining this project over the long term, keep working toward new sampling methods, and try to reduce the compute requirements of AI as much as possible.
Make dy_step respect the original channel count, making it compatible with Stable Cascade models.
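The fix amounts to reading the channel count from the incoming latent instead of assuming the usual 4 channels (Stable Cascade latents use a different count). The helper below is a hypothetical illustration of that pattern, not the repository's actual code:

```python
import torch

def make_work_latent(x: torch.Tensor, scale: int = 2) -> torch.Tensor:
    # Hypothetical helper: derive the channel count from the input tensor
    # itself rather than hard-coding 4, so latents with other channel
    # counts (e.g. Stable Cascade) keep their shape.
    b, c, h, w = x.shape
    return x.new_zeros((b, c, h // scale, w // scale))
```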
Change code for ComfyUI import. This will fix the overwrite error that occurs in ComfyUI when other extensions use scripts
as the import folder (I really hope ComfyUI will standardize its interfaces and version dependencies).
P.S. You may find some meaningless commits; that's because I am not familiar with GitHub and made several attempts. Don't mind them.
Thanks to @pamparamm; his selfless work has been a great help.
Now this sampler can be used as an extension for ComfyUI and the Automatic1111 WebUI.
The inpainting bug has been fixed (at least it no longer throws any exceptions).
Thanks again.
Another extension, from @licyk, lives in the repo https://github.com/licyk/advanced_euler_sampler_extension and is suitable for WebUI 1.8.
It is also useful; thanks to licyk for the hard work as well.
In the future, I will work on making dy step compatible with more samplers (such as the DPM series).
Found a way to avoid errors during inpainting and with extensions.
Please note that this is just a temporary workaround and does not actually resolve the issue: if an error occurs, the sampler falls back to the Euler method.
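The fallback described above can be sketched as a simple wrapper; the function names are illustrative placeholders, not the project's actual API:

```python
def sample_with_fallback(sample_dy, sample_euler, *args, **kwargs):
    # Temporary workaround: if the dy-based sampler raises (as happens in
    # some inpainting/extension code paths), silently retry with plain
    # Euler. This hides the error rather than fixing it.
    try:
        return sample_dy(*args, **kwargs)
    except Exception:
        return sample_euler(*args, **kwargs)
```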
P.S. I have been trying to fix it properly, but none of my approaches seem to work, and I have spent more than 36 hours on it.
Suggestions from anyone are welcome.
I need to take a short break and prepare for my other project: a mobile app based on Flutter, for TRPG. (No worries, this doesn't mean I'm giving up this project, and it's not about diverting traffic either. LOL.)
Add __init__.py for ComfyUI. Thanks to CapsAdmin. I don't use ComfyUI, so I can't tell you how to add it, sorry.
A sampling method based on Euler's approach, designed to generate superior imagery.
The SMEA sampler can significantly mitigate the structural and limb collapse that occurs when generating large images, and to a great extent, it can produce superior hand depictions (not perfect, but better than existing sampling methods).
The SMEA sampler is designed to accommodate the majority of image sizes, with particularly outstanding performance on larger images. It also supports the generation of images in unconventional sizes that lack sufficient training data (for example, running 512x512 in SDXL, 823x1216 in SD1.5, as well as 640x960, etc.).
The SMEA sampler performs very well in SD1.5, but the effects are not as pronounced in SDXL.
In terms of computational cost, Euler dy is approximately equivalent to Euler a, while the Euler SMEA Dy sampler consumes more, approximately 1.25 times as much.
The SMEA sampler theoretically increases image detail (though it cannot achieve NAI3's effect of making images shimmer).
SD1.5: test model AnythingV5-Prt-RE, test pose "Heart Hands", a pose that easily produces malformed hands.
768x768, without LoRA:
768x768, with LoRA:
832x1216, without LoRA:
832x1216, with LoRA:
SDXL: test model animagineXLV31, also testing hand poses.
Step 1: Navigate to the k_diffusion folder within the sd-webui-aki-v4.6\repositories\k-diffusion directory and open the sampling.py file inside (any text editor such as Notepad will do; this file will be referred to as File 1).
Step 2: Copy the entire content of the sampling.py file in this repository and paste it at the end of File 1.
(To present the complete picture, I have utilized PyTorch's abbreviation feature.)
Step 3: Open the sd_samplers_kdiffusion.py file located in the sd-webui-aki-v4.6\modules directory (this will be referred to as File 2).
Step 4: Copy the following two lines from this repository:
Step 5: Restart the webui, and you will see:
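For orientation, sampler registration in sd_samplers_kdiffusion.py is a list of tuples. The entry below is a hypothetical sketch of what such a line looks like; the actual two lines to copy come from this repository, and the names here are assumptions:

```python
# Hypothetical example of a sampler registration entry in the WebUI's
# modules/sd_samplers_kdiffusion.py. Format: (display label, function
# name defined in k_diffusion/sampling.py, aliases, options).
samplers_k_diffusion_entry = (
    'Euler Dy', 'sample_euler_dy', ['k_euler_dy'], {},
)
```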
Now you can use them. There may be some bugs in img2img; feel free to report issues to me (please include screenshots and the error message).
In simple terms, the dyn method regularly extracts a portion of the image, denoises it, and then adds it back to the original image. Theoretically, this should be equivalent to the Euler A method, but its noise addition step is replaced with guided noise.
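As a rough illustration of that idea (a sketch under my own assumptions, not the project's implementation), one dyn-style substep might look like this, where denoise(x, sigma) stands in for the model:

```python
import torch

def dyn_like_substep(x: torch.Tensor, denoise, sigma: float) -> torch.Tensor:
    # Sketch: take a centered crop of the latent, denoise only that patch,
    # and write the result back into the full latent. The real method
    # chooses the region according to a regular pattern; a centered crop
    # is used here purely for illustration.
    b, c, h, w = x.shape
    top, left = h // 4, w // 4
    patch = x[:, :, top:top + h // 2, left:left + w // 2]
    out = x.clone()
    out[:, :, top:top + h // 2, left:left + w // 2] = denoise(patch, sigma)
    return out
```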
The SMEA method enlarges the image's latent space and then compresses it back to its original dimensions, thereby increasing the range of possible image variations. I apologize that I was unable to achieve the subtle glowing effect in Nai3 with the SMEA method.
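The SMEA idea of enlarging the latent and compressing it back could be sketched like so; again this is an illustrative assumption rather than the project's code, and note the author's caveat that follows about PyTorch interpolation:

```python
import torch
import torch.nn.functional as F

def smea_like_substep(x: torch.Tensor, denoise, sigma: float,
                      scale: float = 1.25) -> torch.Tensor:
    # Sketch: enlarge the latent, denoise at the larger size, then
    # compress back to the original resolution.
    h, w = x.shape[-2:]
    big = F.interpolate(x, scale_factor=scale, mode="bilinear",
                        align_corners=False)
    big = denoise(big, sigma)
    return F.interpolate(big, size=(h, w), mode="bilinear",
                         align_corners=False)
```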
A piece of advice: Do not trust PyTorch's interpolation methods for enlarging and shrinking images; they will not contribute to improving image quality. Additionally, replacing random noise with conditional guidance is also a promising path forward.
Email:872324454@qq.com
Bilibili:星河主炮发射