Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Loading model from cache file: C:\Users\Duckers\.cache\huggingface\hub\models--MVDream--MVDream\snapshots\d14ac9d78c48c266005729f2d5633f6c265da467\sd-v2.1-base-4view.pt
Traceback (most recent call last):
File "H:\Stable3D\MVDream-main\scripts\t2i.py", line 79, in
model.to(device)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 989, in to
return self._apply(convert)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 664, in apply
param_applied = fn(param)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\cuda_init.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(NVDream) H:\Stable3D\MVDream-main>
Tried a fresh install in a new conda venv and hit the same error:
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Loading model from cache file: C:\Users\Duckers\.cache\huggingface\hub\models--MVDream--MVDream\snapshots\d14ac9d78c48c266005729f2d5633f6c265da467\sd-v2.1-base-4view.pt
Traceback (most recent call last):
File "H:\Stable3D\MVDream-main\scripts\t2i.py", line 79, in
model.to(device)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 989, in to
return self._apply(convert)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 664, in apply
param_applied = fn(param)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\nn\modules\module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\Duckers\anaconda3\envs\NVDream\lib\site-packages\torch\cuda_init.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(NVDream) H:\Stable3D\MVDream-main>