
Agent failed : ERROR:sd:[AgentScheduler] Task task(ifxb3ta363satpk) failed: #751

Open
Akossimon opened this issue Nov 13, 2024 · 0 comments
I was wondering whether this can be fixed. I hit several of these failures in a row today, then it worked again, and then the error kept coming back again and again. Here is the terminal output, in case it helps figure out what is causing this and whether it can be fixed:


ERROR:sd:[AgentScheduler] Task task(ifxb3ta363satpk) failed:
64%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 46/72 [03:18<01:54, 4.41s/it]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 72/72 [05:11<00:00, 4.33s/it]
SwinIR: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 35/35 [00:13<00:00, 2.67it/s]
*** Error completing request
*** Arguments: ('task(a88jqnq1ltdvyhw)', <agent_scheduler.task_runner.FakeRequest object at 0x38e5bf370>, "(She is wearing a Dior haute couture evening gown)....................... ,\n", 'Blurry, blurry, blurry, (wide-angle lens), (wide angle lens distortion), ng_deepnegative_v1_75t,(worst quality, low quality:1.4), (HUge arms), (big arms), ...............extra legs ,', [], 1, 1, 2, 1280, 856, True, 0.17, 1.5, '8x_NMKDFacesExtended_100000_G', 32, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', ['Clip skip: 2', 'Model hash: checkpoints_on_NVMH_drive symlink/jibMixRealisticXL_v160Aphrodite.safetensors [d6c411bd1d]', 'VAE: None'], 0, 72, 'DPM++ 3M SDE', 'Karras', False, '', 0.8, 2038529722, False, -1, 0, 0, 0, True, False, {'ad_model': 'face_yolov9c.pt', 'ad_model_classes': '', 'ad_tab_enable': False, 'ad_prompt': 'a (30 y. o.) woman , (beautiful teeth), (a warm glow sunset perfect sunset lighting), masterpiece, (best quality), (incredible detail),(incredible details), (intricate high details), (sharp focus), (perfect lighting), dslr, 32k, (high quality), (extra detail), (extra sharp), (natural skin), pores, lora:aldaralora_jibmixrealisticxl_v160aphrodite_5000_lora_f32--exp:1, \n', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.8, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 
'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'face_yolov9c.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '(sharp details), (high quality),(high resolution), (high details), (32k resolution), lora:aldaralora_jibmixrealisticxl_v160aphrodite_4000_lora_f32--exp:1, (sharp details), (high quality),(high resolution), (high details), (32k resolution), ', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.95, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, {'ad_model': 'full_eyes_detect_v1.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '(sharp details), (high quality),(high resolution), (high details), (32k resolution), lora:aldaralora_jibmixrealisticxl_v160aphrodite_4000_lora_f32--exp:1, (sharp details), (high quality),(high resolution), (high details), (32k resolution), ', 'ad_negative_prompt': '', 
'ad_confidence': 0.44, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, {'ad_model': 'lips_v1.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '(sharp details), (high quality),(high resolution), (high details), (32k resolution), lora:aldaralora_jibmixrealisticxl_v160aphrodite_4000_lora_f32--exp:1, (sharp details), (high quality),(high resolution), (high details), (32k resolution), ', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 
'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, {'ad_model': 'hand_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': 'masterpiece, hand, elegant hand, detailed hand, high quality hand, best quality, ', 'ad_negative_prompt': 'extra fingers, old hand, wrinkled skin, (drawn, furry, illustration, cartoon, anime, comic:1.3), 3d, cgi, (drawn, furry, illustration, cartoon, anime, comic:1.3), deformed, low quality, bad quality, worst quality, mutation, mutated, (deformed),----- bad anatomy, bad hand, extra hands, extra fingers, too many fingers, fused fingers, liquid hand, inverted hand, easynegative, sketch, duplicate, ugly, (bad and mutated hands:1.3), (blurry:2.0), horror, geometry, bad_prompt, (bad hands), (missing fingers), (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), ((2girl)), (deformed fingers:1.2), (long fingers:1.2), bad hand,', 'ad_confidence': 0.3, 'ad_mask_filter_method': 'Area', 'ad_mask_k': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 
'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', False, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, True, 0.85, 0.6, 4, False, False, 512, 64, True, True, True, False, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x3a08905e0>, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], 
batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), False, True, 2, 3, 0.05, 0.1, 'bicubic', 0.5, 2, False, True, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CPU', False, 0, 'None', '', None, False, False, 0.5, 0, 'tab_single', False, False, 'Matrix', 'Columns', 
'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, '', '', '(CatGirl warrior:1.2), legendary sword,', '', '', '', 'Default', 1.0, 1, False, False, 'Default', '', '', None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
  File "/Users/akos/pinokio/api/automatic1111.git/app/modules/call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
  File "/Users/akos/pinokio/api/automatic1111.git/app/modules/call_queue.py", line 53, in f
    res = func(*args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/modules/txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
  File "/Users/akos/pinokio/api/automatic1111.git/app/modules/processing.py", line 847, in process_images
    res = process_images_inner(p)
  File "/Users/akos/pinokio/api/automatic1111.git/app/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 59, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/modules/processing.py", line 988, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/Users/akos/pinokio/api/automatic1111.git/app/modules/processing.py", line 1362, in sample
    return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
  File "/Users/akos/pinokio/api/automatic1111.git/app/modules/processing.py", line 1421, in sample_hr_pass
    samples = images_tensor_to_samples(decoded_samples, approximation_indexes.get(opts.sd_vae_encode_method))
  File "/Users/akos/pinokio/api/automatic1111.git/app/modules/sd_samplers_common.py", line 110, in images_tensor_to_samples
    x_latent = model.get_first_stage_encoding(model.encode_first_stage(image))
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/repositories/generative-models/sgm/models/diffusion.py", line 127, in encode_first_stage
    z = self.first_stage_model.encode(x)
  File "/Users/akos/pinokio/api/automatic1111.git/app/repositories/generative-models/sgm/models/autoencoder.py", line 321, in encode
    return super().encode(x).sample()
  File "/Users/akos/pinokio/api/automatic1111.git/app/repositories/generative-models/sgm/models/autoencoder.py", line 308, in encode
    h = self.encoder(x)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/repositories/generative-models/sgm/modules/diffusionmodules/model.py", line 579, in forward
    h = self.down[i_level].block[i_block](hs[-1], temb)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/repositories/generative-models/sgm/modules/diffusionmodules/model.py", line 137, in forward
    h = self.norm2(h)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/akos/pinokio/api/automatic1111.git/app/extensions-builtin/Lora/networks.py", line 614, in network_GroupNorm_forward
    return originals.GroupNorm_forward(self, input)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 279, in forward
    return F.group_norm(
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2558, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 3032, in native_group_norm
    out, mean, rstd = _normalize(input_reshaped, reduction_dims, eps)
  File "/Users/akos/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 2994, in _normalize
    out = (a - mean) * rstd
RuntimeError: MPS backend out of memory (MPS allocated: 17.89 GB, other allocations: 17.26 GB, max allowed: 36.27 GB). Tried to allocate 1.18 GB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
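The RuntimeError itself names a possible workaround: relaxing PyTorch's MPS high-watermark limit via an environment variable before launching the WebUI. The sketch below only shows how to set that variable in the launching shell; it is an untested suggestion taken from the error text, not a confirmed fix, and `0.0` disables the allocation cap entirely (which, as the message warns, may cause system failure if real memory runs out).

```shell
# Set in the same shell session that launches the WebUI (e.g. before
# starting it from Pinokio's terminal). 0.0 disables the MPS upper
# memory limit; a conservative ratio such as 0.9 may be safer to try first.
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
echo "PYTORCH_MPS_HIGH_WATERMARK_RATIO=$PYTORCH_MPS_HIGH_WATERMARK_RATIO"
```

Reducing memory pressure in other ways (smaller hires-fix target resolution, fewer simultaneously enabled ADetailer passes) may also avoid the allocation failure without lifting the limit.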
