AssertionError: Torch not compiled with CUDA enabled #25

Open
DuckersMcQuack opened this issue Nov 4, 2023 · 0 comments
@DuckersMcQuack

Tried a fresh install in a conda venv:

Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Loading model from cache file: C:\Users\Duckers\.cache\huggingface\hub\models--MVDream--MVDream\snapshots\d14ac9d78c48c266005729f2d5633f6c265da467\sd-v2.1-base-4view.pt
Traceback (most recent call last):
File "H:\Stable3D\MVDream-main\scripts\t2i.py", line 79, in
model.to(device)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 989, in to
return self._apply(convert)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 664, in apply
param_applied = fn(param)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\cuda_init
.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

(NVDream) H:\Stable3D\MVDream-main>
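For reference, this assertion is raised by torch.cuda's _lazy_init when the installed PyTorch build is CPU-only, so model.to(device) cannot move the weights to a CUDA device. A minimal sketch (not part of the MVDream repo) to confirm whether the environment has a CUDA-enabled build:

import torch

# On a CPU-only wheel, the version string typically ends in "+cpu",
# torch.version.cuda is None, and is_available() returns False.
print(torch.__version__)         # e.g. "2.0.1+cpu" vs "2.0.1+cu118"
print(torch.version.cuda)        # CUDA toolkit version baked into the build, or None
print(torch.cuda.is_available()) # False on a CPU-only build or without a working driver

If this prints False, the usual fix is to reinstall a CUDA-enabled PyTorch wheel from the official index (the exact command depends on your CUDA/driver version, so take the one pytorch.org recommends for this setup) or to run the script on the CPU instead.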
