
indices should be either on cpu or on the same device as the indexed tensor (cpu) #32

sneccc opened this issue Nov 24, 2022 · 3 comments

sneccc commented Nov 24, 2022

Every extension works fine except this one. It's just the default webui code; I didn't change anything.

/notebooks/stable-diffusion-webui
Patching transformers to fix kwargs errors.
Dreambooth API layer loaded
Aesthetic Image Scorer: Unable to load Windows tagging script from tools directory
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loading weights [81761151] from /notebooks/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
Global Step: 840000
Applying xformers cross attention optimization.
Model loaded.
no display name and no $DISPLAY environment variable
Loaded a total of 6 textual inversion embeddings.
Embeddings: bad_prompt_version2, gadget, testInversion, bad_prompt, testest-neg, testest
Running on local URL:  http://127.0.0.1:7860/
Running on public URL: $$$$$$$

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Loading weights [e02601f3] from /notebooks/stable-diffusion-webui/models/Stable-diffusion/sd_v1-5_vae.ckpt
Applying xformers cross attention optimization.
Weights loaded.
Training at rate of 0.005 until step 8000
Preparing dataset...
100%|█████████████████████████████████████████████| 5/5 [00:01<00:00,  4.97it/s]
  0%|                                                  | 0/8000 [00:00<?, ?it/s]
Applying xformers cross attention optimization.
Error completing request
Arguments: ('testest', '0.005', 1, '/notebooks/kohya_ss/train_data/conceptart/', 'dream_artist', 512, 512, 8000, 500, 500, '/notebooks/stable-diffusion-webui/textual_inversion_templates/style.txt', True, False, '', '', 20, 0, 7, -1.0, 512, 512, 3.0, '', True, True, 1, 1, 1.0, 25.0, 1.0, 25.0, 0.9, 0.999, False, 1) {}
Traceback (most recent call last):
  File "/notebooks/stable-diffusion-webui/modules/ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "/notebooks/stable-diffusion-webui/webui.py", line 57, in f
    res = func(*args, **kwargs)
  File "/notebooks/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/ui.py", line 30, in train_embedding
    embedding, filename = dream_artist.cptuning.train_embedding(*args)
  File "/notebooks/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/cptuning.py", line 440, in train_embedding
    output = shared.sd_model(x, c_in, scale=cfg_scale)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/notebooks/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 879, in forward
    return self.p_losses(x, c, t, *args, **kwargs)
  File "/notebooks/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/cptuning.py", line 287, in p_losses_hook
    logvar_t = self.logvar[t_raw].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
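
For context, the error comes from indexing a tensor that lives on the CPU (self.logvar) with an index tensor that lives on the GPU (t_raw). A minimal sketch of the same mismatch, using hypothetical stand-in tensors rather than the actual webui objects:

import torch

# Hypothetical stand-ins: logvar is a CPU tensor (as in LatentDiffusion),
# while the sampled timesteps are created on the GPU.
logvar = torch.zeros(1000)                            # CPU tensor
t = torch.randint(0, 1000, (4,), device="cuda")       # CUDA index tensor

# logvar[t]  -> RuntimeError: indices should be either on cpu or on the
#               same device as the indexed tensor (cpu)

# Moving the indices to the CPU before indexing avoids the mismatch:
logvar_t = logvar[t.cpu()].to("cuda")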
@andykaufseo

Go to /notebooks/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py, line 879, and change the existing code to: logvar_t = self.logvar[t.cpu()].to(self.device)
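
In context, the patched line would look roughly like this (a sketch of the relevant spot in LatentDiffusion.p_losses; only the single replacement line from the comment above is confirmed, the surrounding comments are assumptions):

# ldm/models/diffusion/ddpm.py, around line 879, inside LatentDiffusion.p_losses
# before: logvar_t = self.logvar[t].to(self.device)
# after:  index the CPU logvar buffer with CPU indices, then move the result back
logvar_t = self.logvar[t.cpu()].to(self.device)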

@datacurse


Thank you, that helped! Though I applied the same change to line 305 of cptuning.py instead.
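
The traceback earlier in this issue points at the same pattern inside the extension's hook (cptuning.py, p_losses_hook, logvar_t = self.logvar[t_raw].to(self.device)), so the equivalent edit there would presumably be:

# extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/cptuning.py,
# inside p_losses_hook (the exact line number may differ between versions; the
# traceback above shows 287, while this comment mentions 305)
logvar_t = self.logvar[t_raw.cpu()].to(self.device)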


HRTK92 commented Dec 21, 2022
