Add new sampler DDIM CFG++ #16035
Conversation
Error during generation. I also see that ControlNet is mentioned in the error, although it was not even used during generation.
Maybe you should change line 51 in the modules/sd_samplers_timesteps_impl.py file. What do you think?
""" | ||
alphas_cumprod = model.inner_model.inner_model.alphas_cumprod | ||
alphas = alphas_cumprod[timesteps] | ||
alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(float64(x)) |
Change it to
alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(torch.float64 if x.device.type != 'mps' and x.device.type != 'xpu' else torch.float32)
to prevent NameError: name 'float64' is not defined.
The master branch is missing the float64 helper from modules/torch_utils.py. I'd recommend testing on the dev branch for now. Relevant commit: 9c8075b
It's possible this is something extensible to many of the existing samplers. Samples generated from my
The implementation of float64 added to torch_utils in #15815 does not work as intended: the function always returns torch.float64, even on mps and xpu. I just posted #16058 with a proper implementation. So, if someone is getting errors on Mac while testing this, apply my torch_utils fix first.
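The dtype-selection logic being discussed can be sketched as follows. This is a hedged, standalone illustration of the device check, not the actual modules/torch_utils.py code; pick_float64 and the use of plain device-type strings are inventions for this sketch (the real helper operates on torch tensors).

```python
# Hypothetical sketch of the dtype selection discussed above: use float64
# everywhere except on backends that lack fp64 support (mps, xpu).
# `pick_float64` and the string device type are illustrative only.
def pick_float64(device_type: str) -> str:
    """Return the dtype name a tensor on `device_type` should be cast to."""
    return "float32" if device_type in ("mps", "xpu") else "float64"
```

For example, pick_float64("cuda") yields "float64", while pick_float64("mps") falls back to "float32", which is the behavior the fix above restores.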
Can you point this fork at the dev branch? Or is it possible to make it a schedule parameter, to be used like this? EDIT: Did a semi-implementation here: Panchovix@f8dfe20
…art 1 Re-applied AUTOMATIC1111/stable-diffusion-webui#15333 into Forge, to use any scheduler with any sampler. Also implemented CFG++ from AUTOMATIC1111/stable-diffusion-webui#16035 as a scheduler instead of a sampler. Ported SD Turbo and Variance Preserving from https://github.com/lllyasviel/stable-diffusion-webui-forge/blob/main/ldm_patched/contrib/external_custom_sampler.py
Adds a CFG++ scheduler, mirroring how the CFG++ sampler is implemented in AUTOMATIC1111#16035. Modified the code to work with k_diffusion samplers without breaking compatibility with other schedulers.
It generates bad results.
I can't find DPM++ 2M CFG++.
In t2i, it doesn't seem better.
Why did you merge it? It doesn't make SDXL better.
DDIM CFG++ works well with a Pony model in my environment. I consider that the quality of the fast DDIM has improved a bit.
Can you maybe do the same for the other samplers sometime? On webui/forge we still only have ddim_cfgpp. I tried using the same implementation for euler_a_cfgpp, but I get burned results and can't really tell if it's working. On Forge you could also use the same function that's used in ComfyUI.
Description
This PR implements a new sampler "DDIM CFG++" derived from CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models (Chung et al., 2024).
The new sampler is a modification of DDIM; the main change is that the unconditional noise, instead of the conditional noise, is used to guide the denoising.
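As a rough illustration of that change, one deterministic DDIM step with CFG++-style re-noising might look like the scalar sketch below. This follows the idea from the paper, not this PR's exact code; the function name and all parameters are made up for the example.

```python
import math

def ddim_cfgpp_step(x, e_cond, e_uncond, alpha, alpha_prev, cfg_scale):
    """One deterministic DDIM step with the CFG++ modification (scalar sketch).

    alpha / alpha_prev are the cumulative alpha-bar values at the current
    and previous timesteps; e_cond / e_uncond are the model's conditional
    and unconditional noise estimates.
    """
    # Standard classifier-free guidance combination of the two noise estimates.
    e_cfg = e_uncond + cfg_scale * (e_cond - e_uncond)
    # Predict x0 from the guided noise, exactly as in plain DDIM.
    pred_x0 = (x - math.sqrt(1.0 - alpha) * e_cfg) / math.sqrt(alpha)
    # CFG++ change: re-noise with the UNconditional estimate, not e_cfg.
    return math.sqrt(alpha_prev) * pred_x0 + math.sqrt(1.0 - alpha_prev) * e_uncond
```

Plain DDIM would re-noise with e_cfg on the last line; swapping in e_uncond is the whole difference, and when the two noise estimates coincide the step reduces to an ordinary DDIM step.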
Major changes:
Screenshots/videos:
Prompt: "A photo of a silver 1998 Toyota Camry."
Prompt: "A hyperrealistic portrait close-up photo of a smug man lighting a cigarette in his mouth in the city light at night in the rain with an explosion behind him."
Additional Links:
Official project page: https://cfgpp-diffusion.github.io/
Official code repository: https://github.com/CFGpp-diffusion/CFGpp
ArXiv: https://arxiv.org/abs/2406.08070
Checklist: