Issue where a model is merged but not saved leads to "Cannot copy out of meta tensor; no data!" - persists through restart - had to disable extension to continue #1
Comments
When I come across a similar fatal error, going to the "Extensions" tab -> "Apply and restart UI" helps in some cases.
This is a known issue; similar issues arise from time to time. I guess it is caused by a non-existent checkpoint (a "fake checkpoint") combined with the checkpoint cache.
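As a rough illustration (hypothetical names, not the actual webui code), the problem with a "fake checkpoint" plus a cache is that the cache keeps serving an entry that has no backing file on disk:

```python
# Hypothetical sketch of a checkpoint cache; names are illustrative,
# not taken from the actual webui source.
checkpoint_cache = {}

def load_checkpoint(title, files_on_disk):
    """Return cached weights if present; otherwise load from disk."""
    if title in checkpoint_cache:
        # Cache hit: the entry is reused even if the file never existed.
        return checkpoint_cache[title]
    if title not in files_on_disk:
        raise FileNotFoundError(title)
    checkpoint_cache[title] = f"weights({title})"
    return checkpoint_cache[title]

# A merged-but-unsaved model leaves a "fake checkpoint" entry behind:
checkpoint_cache["modelA + modelB"] = "merged weights (no file on disk)"

# Later loads keep resolving to the stale entry instead of failing fast.
print(load_checkpoint("modelA + modelB", files_on_disk={"modelA.safetensors"}))
```

Once such an entry is in the cache, every subsequent model switch can trip over it until the cache is cleared or the UI is restarted.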
Thank you for your report, I will check it soon!
Please reopen this or open a new issue if you encounter similar problems.
Re the error "NotImplementedError: Cannot copy out of meta tensor; no data!" - is this something that could be corrected on the A1111 side in the future? I've so far avoided turning off model caching, as it's a great speed boost if you have the resources (I have 64GB RAM + 24GB VRAM). I'll look into the first option, as I'm not sure what that entails. That said, I still get this error almost every time I use Model Mixer and then try to use A1111 functions without it. I'm sure there are others who get stuck on this issue and don't visit GitHub to find out why their models aren't loading. I wish there were a clear fix that doesn't require disabling caching. Wouldn't flushing the model cache when the extension is untoggled be a potential workaround? Thanks again for your time. (Another copy/paste of a different console log from just before posting this comment:)
Sadly, I've been trying this for a week and it doesn't fix the problem. At the moment, if I want to use Model Mixer I have to plan on restarting afterward, as there is no way to change the model after using it. I'm not sure if something else I have installed is causing this, but I hope a future version will work down the road. I gave it another shot, and here is my log. I did a merge, then tried changing models, tried the Settings menu's unload-checkpoint button, then tried running Model Mixer itself again. Everything errors until restart.
I think I fixed this along with another issue. Basically, I stopped webui from extracting data from one model onto the next. This empties the model container, and I deleted that from the same file too. It is probably coded in a very similar way for SDXL; you might only need to alter the second bit, maybe.
Wow, thanks! I'm not a coder, but I'll give it a try and use ChatGPT-4 for backup. Appreciate you commenting!
So this is where you're stuck now?
I'm glad you got it working then. Have fun.
This solution worked for me; thanks so much!
AUTOMATIC1111/stable-diffusion-webui#13582 - there are two issues here. One is the issue already mentioned in this thread:

```diff
diff --git a/modules/sd_models.py b/modules/sd_models.py
index 0f1fb265..e466ef95 100644
--- a/modules/sd_models.py
+++ b/modules/sd_models.py
@@ -758,12 +758,13 @@ def reuse_model_from_already_loaded(sd_model, checkpoint_info, timer):
             send_model_to_trash(loaded_model)
             timer.record("send model to trash")
 
-    if shared.opts.sd_checkpoints_keep_in_cpu:
-        send_model_to_cpu(sd_model)
-        timer.record("send model to cpu")
+    if sd_model and shared.opts.sd_checkpoints_keep_in_cpu:
+        send_model_to_cpu(sd_model)
+        timer.record("send model to cpu")
 
     if already_loaded is not None:
         send_model_to_device(already_loaded)
```

The other issue concerns reused models: before a reused model is returned, the hijack() and apply_unet() calls were missing:

```diff
diff --git a/modules/sd_models.py b/modules/sd_models.py
index d2ab060e..0f1fb265 100644
--- a/modules/sd_models.py
+++ b/modules/sd_models.py
@@ -815,6 +815,8 @@ def reload_model_weights(sd_model=None, info=None):
     sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
 
     if sd_model is not None and sd_model.sd_checkpoint_info.filename == checkpoint_info.filename:
+        sd_hijack.model_hijack.hijack(sd_model)
+        sd_unet.apply_unet()
         return sd_model
```
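For context, the error message itself comes from PyTorch, not from webui: tensors on the "meta" device carry shape and dtype but no actual storage, so any attempt to copy them to a real device fails. The tracebacks above show `send_model_to_cpu()` ending in `m.to(devices.cpu)`, which is exactly this failing call. A minimal reproduction:

```python
import torch

# A module built on the "meta" device has parameters with shapes but no data.
layer = torch.nn.Linear(4, 4, device="meta")

try:
    # Moving a meta-device module to a real device requires copying tensor
    # data that does not exist, so PyTorch raises NotImplementedError.
    layer.to("cpu")
except NotImplementedError as e:
    print(e)  # e.g. "Cannot copy out of meta tensor; no data!"
```

This is why the first patch guards the `send_model_to_cpu(sd_model)` call: if the current model's weights have already been discarded (left on the meta device), calling `.to()` on it can only crash.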
Aha! Excellent work.
The main cause: AUTOMATIC1111/stable-diffusion-webui#13582
Hey, first off, congratulations on your extension. Having the option to merge up to 5 models, right on the TXT2IMG page, is pretty cool.
I don't have time to troubleshoot this right now, but I did want to send in the error I received in case it makes sense to you. I had been merging 5 models and generating images for an hour or so, then left the PC. When I came back and tried to generate another image, I got the error "cannot copy out of meta tensor; no data!".
When I closed A1111 (latest version) and restarted it, it seemed to try to reload the last checkpoint, which is listed as a huge text string referencing the temporary merged model you use without saving. (BTW, I also noticed the string is huge in PNG Info and below the preview.) I'm not sure if there are too many characters to handle or if it's just upset that it can't find that file, but it seems to try to reload that "file" on restart and fails. It doesn't release that request either, so even changing models keeps the same error popping up. I restarted multiple times and wasn't able to generate an image with any model until I finally disabled your extension.
One thing of note: my last prompt does seem to be persistently recalled into the txt2img input bar when I start A1111. In the past I used an extension called State that would restore your exact state after restarting A1111. I don't have that installed anymore, as it stopped working with v1.6. However, prompts still seem to get recalled post-restart, so perhaps a remnant of that extension is involved.
Here's a copy paste of a few starts and error - hope it's insightful:
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.53it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.62it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.68it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.69it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.68it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.69it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.61it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.60it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.43it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.45it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.49it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.71it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████| 2840/2840 [10:11<00:00, 4.65it/s]
Unloading model 4 over the limit of 2: SDXL\2023-09-02 - Topnotch (Good Series) and #25 Gorge n1 supermerged - did lots of images.safetensors [b687629de1]
Unloading model 3 over the limit of 2: SDXL_2023-09-01 - Topnotch Artstyle - #2.5 - During Gorge N1 Stream - 14img - TXT ON - B8 - 1e5-step00001500 + SDXL_2023-08-27 - SDXL-Merge - Topnotch - 3 models (8-25 - 8-24 - 8-26) + SDXL_2023-08-24 - Topnotch Artstyle - 10img-TXT off - 1500 (Cont from 1k) + SDXL_2023-08-31 - Topnotch Artstyle - 12img - TXT on - 20rep - Batch 4 - bucket on -(Good Series) - 2000 steps + SDXL_2023-08-28 - SDXL Merge - 8k Topnotch 20 doubled dif smooth - Use .2 for weight then good.safetensors [6fc4c1bd77]
Reusing loaded model SDXL_2023-09-03 - Supermerge - add dif - 2 + SDXL_2023-08-27 - SDXL-Merge - Topnotch - 3 models (8-25 - 8-24 - 8-26) + SDXL_Topnotch Artstyle 20img-20rep-Txt-On-step00001500 + SDXL_2023-08-31 - Topnotch Artstyle (Mj greed theme park 3 - TXT enc on) - 12img-step00002000 + SDXL_2023-08-31 - Topnotch Artstyle - 12img - TXT on - 20rep - Batch 4 - bucket on -(Good Series) - 2000 steps + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001800.safetensors [d72e289c4d] to load SDXL\2023-09-04 - topnotch artstyle - 20img - TXT ON - B2 - 1e5-step00003200.safetensors
changing setting sd_model_checkpoint to SDXL\2023-09-04 - topnotch artstyle - 20img - TXT ON - B2 - 1e5-step00003200.safetensors: NotImplementedError
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\options.py", line 140, in set
option.onchange()
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\initialize_util.py", line 170, in
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 738, in reload_model_weights
send_model_to_cpu(sd_model)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
m.to(devices.cpu)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
Checkpoint SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29] not found; loading fallback 1- Good - Tuckercarlson - 2022-10-12T18-46-24_tuckercarlson_-2000_continued-16_changed_images-_default_reg_16_training_images_4000_max_training_steps_tuckercarlson_token_person_class_word-0047-0000-0396.safetensors [27411f7a80]
*** Error completing request
*** Arguments: ('task(whp4o86efmaabuf)', 'topnotch artstyle, location, HouseholdDevice', '', [], 40, 'DPM++ 2M Karras', 1, 1, 5, 1208, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001DE80617DF0>, 0, False, 'SDXL\sd_xl_refiner_1.0.safetensors [7440042bbd]', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE80616950>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE805ED270>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE46F2AA40>, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29]', 'None', 5, True, False, False, False, False, 'None', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', 
'0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', None, None, False, None, None, False, None, None, False, 50, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\processing.py", line 719, in process_images
sd_models.reload_model_weights()
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
m.to(devices.cpu)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
changing setting sd_model_checkpoint to 2023-05-17 - Topnotch (Electronics Test 20 img) - [.50 Normal Flip] - 2500 - epoc.ckpt [3f056ed8bb]: NotImplementedError
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\options.py", line 140, in set
option.onchange()
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\initialize_util.py", line 170, in
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
m.to(devices.cpu)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
Checkpoint SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29] not found; loading fallback 1- Good - Tuckercarlson - 2022-10-12T18-46-24_tuckercarlson_-2000_continued-16_changed_images-_default_reg_16_training_images_4000_max_training_steps_tuckercarlson_token_person_class_word-0047-0000-0396.safetensors [27411f7a80]
*** Error completing request
*** Arguments: ('task(xo3ghwul7h6e9gt)', 'topnotch artstyle, location, HouseholdDevice', '', [], 40, 'DPM++ 2M Karras', 1, 1, 5, 1208, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001DDA1185C90>, 0, False, 'SDXL\sd_xl_refiner_1.0.safetensors [7440042bbd]', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DDA1184D00>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DDA1184970>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE805F2440>, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29]', 'None', 5, True, False, False, False, False, 'None', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', 
'0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', None, None, False, None, None, False, None, None, False, 50, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\processing.py", line 719, in process_images
sd_models.reload_model_weights()
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
m.to(devices.cpu)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
Checkpoint SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29] not found; loading fallback 1- Good - Tuckercarlson - 2022-10-12T18-46-24_tuckercarlson_-2000_continued-16_changed_images-_default_reg_16_training_images_4000_max_training_steps_tuckercarlson_token_person_class_word-0047-0000-0396.safetensors [27411f7a80]
*** Error completing request
*** Arguments: ('task(zj8ljzmohy43u64)', 'topnotch artstyle, location, HouseholdDevice', '', [], 40, 'DPM++ 2M Karras', 1, 1, 5, 1208, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001DE805F3F40>, 0, False, 'SDXL\sd_xl_refiner_1.0.safetensors [7440042bbd]', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE36B98550>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE36B98700>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE36BA4340>, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29]', 'None', 5, True, False, False, False, False, 'None', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', 
'0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', None, None, False, None, None, False, None, None, False, 50, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\processing.py", line 719, in process_images
    sd_models.reload_model_weights()
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
    m.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
Checkpoint SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29] not found; loading fallback 1- Good - Tuckercarlson - 2022-10-12T18-46-24_tuckercarlson_-2000_continued-16_changed_images-_default_reg_16_training_images_4000_max_training_steps_tuckercarlson_token_person_class_word-0047-0000-0396.safetensors [27411f7a80]
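The `NotImplementedError` above comes from PyTorch's meta device: once the merged "fake" checkpoint is discarded, the cached model's parameters are apparently left on the meta device, which holds only shape and dtype metadata, no actual data, so the later `m.to(devices.cpu)` call in `send_model_to_cpu` has nothing to copy. A minimal sketch of the failure (assuming a recent PyTorch; the exact error wording varies by version):

```python
import torch

# Parameters on the "meta" device carry shape/dtype metadata only -- no storage.
layer = torch.nn.Linear(4, 4, device="meta")

try:
    layer.to("cpu")  # same call path as send_model_to_cpu's m.to(devices.cpu)
except NotImplementedError as err:
    # Raises the "Cannot copy out of meta tensor; no data!" failure seen above.
    print(type(err).__name__)
```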
*** Error completing request
*** Arguments: ('task(2ps0463ll0ovgkt)', 'topnotch artstyle, location, HouseholdDevice', '', [], 40, 'DPM++ 2M Karras', 1, 1, 5, 1208, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001DE411E6440>, 0, False, 'SDXL\sd_xl_refiner_1.0.safetensors [7440042bbd]', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE805F3F70>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE805F2350>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE8060A440>, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29]', 'None', 5, True, False, False, False, False, 'None', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', 
'0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', None, None, False, None, None, False, None, None, False, 50, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\processing.py", line 719, in process_images
    sd_models.reload_model_weights()
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
    m.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
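A cache-flush workaround like the one suggested above would amount to detecting cached models whose weights have already been freed to the meta device and refusing to move them, so the caller can drop the cache entry and reload from disk instead of crashing. A hypothetical guard (the helper name `safe_send_to_cpu` and the skip-and-reload policy are assumptions, not the extension's or WebUI's actual code):

```python
import torch

def safe_send_to_cpu(model: torch.nn.Module) -> torch.nn.Module:
    """Move a model to CPU, skipping it if its weights were freed to 'meta'."""
    if any(p.is_meta for p in model.parameters()):
        # No data left to copy; calling .to() would raise NotImplementedError.
        # The caller should evict this cache entry and reload the checkpoint.
        return model
    return model.to("cpu")

# A meta-device module passes through untouched instead of crashing.
ghost = torch.nn.Linear(4, 4, device="meta")
real = torch.nn.Linear(4, 4)
print(safe_send_to_cpu(ghost).weight.is_meta)       # skipped: still meta
print(safe_send_to_cpu(real).weight.device.type)    # moved: cpu
```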