[Bug]: After checking AnimateDiff, txt2img calls infv2v and leads to a group_norm error #17

Open
jinghuacao opened this issue Jun 24, 2024 · 1 comment

jinghuacao commented Jun 24, 2024

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

As soon as I enable AnimateDiff in either the txt2img or img2img tab of webui-forge and click Generate, the exception below is triggered every time.

Steps to reproduce the problem

  1. Install stable-diffusion-webui-forge or the stable-diffusion-webui-forge gn-patcher-for-early-ad branch
  2. Leave AnimateDiff disabled and generate an image with a LoRA
  3. Enable AnimateDiff with: motion module = mm_sdxl_hs.safetensors, save format = gif/png, number of frames = 64, FPS = 8, display loop number = 0, closed loop = R-P, context batch size = 16, stride = 1, overlap = -1, frame interpolation = Off, interp X = 10, empty video source, empty video path, empty mask path
  4. Click Generate in the txt2img tab
  5. txt2img triggers the exception on the first run
  6. call_queue prints out the txt2img arguments (see below)

*** Error completing request
*** Arguments: ('task(imz4twf5rb7i0ur)', <gradio.routes.Request object at 0x000002033741D000>, 'score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_anime,\nayakakamisato,,\nayaka kamisato, blunt bangs, blue eyes, blue hair, hair ornament, hair ribbon, hair tubes, long hair, ponytail, tress ribbon, smile,\noutdoors, onsen, frost, frozen pool, ice, winter\nlooking at viewer, dutch angle, lora:genshin-ayaka-kamisato-ingame-ponyxl-lora-nochekaiser:1', 'monochrome, simple background, 3d, watermark, text, dialogue', [], 25, 'DPM++ 2M SDE Karras', 1, 1, 7, 576, 448, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', ['Eta: 0.0'], 0, False, '', 0.8, 948275092, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000203374BBD60>, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable

The call_queue exception itself is misleading: the result is NoneType only because txt2img did not complete, so there is no return value to build the list from. See the "Console logs" section below for the call stack showing where it actually broke down.
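In other words (a minimal, hypothetical sketch, not the webui's actual code): the wrapper expects an iterable result, so when txt2img fails on the worker thread and hands back None, wrapping it in list() is what produces this secondary TypeError.

import traceback

def wrap_call(func, *args, **kwargs):
    # Simplified stand-in for the wrapper in modules/call_queue.py (line 57 in the trace):
    # it expects the wrapped function to return an iterable of results.
    try:
        res = list(func(*args, **kwargs))   # list(None) -> TypeError: 'NoneType' object is not iterable
    except Exception:
        traceback.print_exc()               # this is the secondary traceback shown above
        res = [None, '', '']
    return res

def broken_txt2img(*args, **kwargs):
    # Stands in for txt2img when the real exception is raised (and reported) on the
    # forge main thread, so nothing is returned to the caller here.
    return None

wrap_call(broken_txt2img)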

What should have happened?

txt2img should behave as it does when AnimateDiff is disabled: generate an image without triggering this exception every time.

Commit where the problem happens

webui: both webui-forge and webui-forge-gn-patcher
https://github.com/lllyasviel/stable-diffusion-webui-forge/tree/conrevo/gn-patcher-for-early-ad

extension:
sd-forge-animatediff-forge-master

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

command line arguments:

set PYTHON=
set GIT=
set A1111_HOME=h:/AI/Stable-Diffusion
set VENV_DIR=%A1111_HOME%/venv
set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% --listen --share --api --opt-sub-quad-attention --disable-nan-check --xformers --precision full --no-half ^
--ckpt-dir %A1111_HOME%/models/Stable-diffusion ^
--hypernetwork-dir %A1111_HOME%/models/hypernetworks ^
--embeddings-dir %A1111_HOME%/embeddings ^
--lora-dir %A1111_HOME%/models/Lora

call ..\Stable-Diffusion\venv\scripts\activate

call webui.bat

Console logs

Moving model(s) has taken 53.51 seconds
0%| | 0/25 [00:06<?, ?it/s]
Traceback (most recent call last):
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules_forge\main_thread.py", line 37, in loop
task.work()
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\txt2img.py", line 111, in txt2img_function
processed = processing.process_images(p)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\processing.py", line 1275, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\sd_samplers_kdiffusion.py", line 251, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\sd_samplers_common.py", line 263, in launch_sampling
return func()
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\sd_samplers_kdiffusion.py", line 251, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules_forge\forge_sampler.py", line 88, in forge_sample
denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\ldm_patched\modules\samplers.py", line 289, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\ldm_patched\modules\samplers.py", line 256, in calc_cond_uncond_batch
output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\extensions\sd-forge-animatediff-forge-master\scripts\animatediff_infv2v.py", line 146, in mm_sd_forward
out = apply_model(info["input"][_context], info["timestep"][_context], **info_c)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\ldm_patched\modules\model_base.py", line 90, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 884, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 64, in forward_timestep_embed
x = modifier(x, 'after', layer, layer_index, ts, transformer_options)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\extensions\sd-forge-animatediff-forge-master\scripts\animatediff_mm.py", line 83, in mm_block_modifier
return self.mm.down_blocks[mm_idx0].motion_modules[mm_idx1](x)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\extensions\sd-forge-animatediff-forge-master\motion_module.py", line 127, in forward
return self.temporal_transformer(x)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\extensions\sd-forge-animatediff-forge-master\motion_module.py", line 179, in forward
hidden_states = self.norm(hidden_states).type(hidden_states.dtype)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\ldm_patched\modules\ops.py", line 146, in forward
return super().forward(*args, **kwargs)
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\modules\normalization.py", line 279, in forward
return F.group_norm(
File "h:\AI\Stable-Diffusion\venv\lib\site-packages\torch\nn\functional.py", line 2558, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float
mixed dtype (CPU): expect parameter to have scalar type of Float
*** Error completing request
*** Arguments: ('task(ekj26imlms0tjc5)', <gradio.routes.Request object at 0x000002520F39EFE0>, 'score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_anime,\nayakakamisato,,\nayaka kamisato, blunt bangs, blue eyes, blue hair, hair ornament, hair ribbon, hair tubes, long hair, ponytail, tress ribbon, smile,\noutdoors, onsen, frost, frozen pool, ice, winter\nlooking at viewer, dutch angle, lora:genshin-ayaka-kamisato-ingame-ponyxl-lora-nochekaiser:1', 'monochrome, simple background, 3d, watermark, text, dialogue', [], 25, 'DPM++ 2M SDE Karras', 1, 1, 7, 576, 448, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', ['Eta: 0.0'], 0, False, '', 0.8, 948275092, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000002520FC830D0>, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "H:\AI\stable-diffusion-webui-forge-conrevo-gn-patcher\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable

Additional information

No response

jinghuacao (Author)

This is how I tested it:
when I add all-in-fp32, it works;
if I change to all-in-fp16, the model runs, but it randomly gets stuck on one frame and freezes;
then I tried a few options to turn on fp32 individually:
vae-in-fp32 does not work, same error;
clip-in-fp32 does not work, same error.