File "C:\Users\User\nuwave2\inference.py", line 71, in
model = NuWave2(hparams).to(args.device)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 820, in apply
param_applied = fn(param)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda_init.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Also, I don't know what values to place for the placeholder arguments {--steps:option} {--gt:option}.
The text was updated successfully, but these errors were encountered:
When running inference.py, an error occurs:

File "C:\Users\User\nuwave2\inference.py", line 71, in <module>
model = NuWave2(hparams).to(args.device)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 820, in apply
param_applied = fn(param)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda_init.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Also, I don't know what values to place for the placeholder arguments {--steps:option} {--gt:option}.
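For reference, a minimal sketch of one way to work around the CUDA error, assuming args.device is the device string passed on the command line (the resolve_device helper below is hypothetical, not part of the nuwave2 code): if the installed torch wheel is CPU-only, either install a CUDA-enabled build or fall back to "cpu" before calling .to().

```python
import torch

# Hypothetical helper (not part of nuwave2): pick a usable device, falling back
# to CPU when the installed torch build has no CUDA support, which is exactly
# what the AssertionError above reports.
def resolve_device(requested: str) -> torch.device:
    if requested.startswith("cuda") and not torch.cuda.is_available():
        print("CUDA is unavailable in this torch build; falling back to CPU.")
        return torch.device("cpu")
    return torch.device(requested)

# Example usage around the failing line, assuming the same args as in the traceback:
# model = NuWave2(hparams).to(resolve_device(args.device))
```

Alternatively, installing a CUDA-enabled PyTorch wheel (following the selector on pytorch.org) avoids the fallback entirely.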