
error #179

Open
LolekLiam opened this issue Oct 10, 2021 · 2 comments
Comments

@LolekLiam

Here is the error:
C:\Users\Liam\OneDrive - Osnovna šola Dolenjske Toplice\Desktop\deep-daze-0.10.2>imagine "hell"
Setting jit to False because torch version is not 1.7.1.
c:\python39\lib\site-packages\torch\cuda\amp\grad_scaler.py:115: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
warnings.warn("torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.")
Starting up...
Imagining "hell" from the depths of my weights...
epochs: 0%| | 0/20 [00:00<?, ?it/s]c:\python39\lib\site-packages\torch\cuda\amp\autocast_mode.py:120: UserWarning: torch.cuda.amp.autocast only affects CUDA ops, but CUDA is not available. Disabling.
warnings.warn("torch.cuda.amp.autocast only affects CUDA ops, but CUDA is not available. Disabling.")
iteration: 0%| | 0/1050 [01:34<?, ?it/s]
epochs: 0%| | 0/20 [01:35<?, ?it/s]
Traceback (most recent call last):
File "c:\python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\python39\lib\runpy.py", line 87, in run_code
exec(code, run_globals)
File "C:\Python39\Scripts\imagine.exe_main
.py", line 7, in
File "c:\python39\lib\site-packages\deep_daze\cli.py", line 151, in main
fire.Fire(train)
File "c:\python39\lib\site-packages\fire\core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "c:\python39\lib\site-packages\fire\core.py", line 466, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "c:\python39\lib\site-packages\fire\core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "c:\python39\lib\site-packages\deep_daze\cli.py", line 147, in train
imagine()
File "c:\python39\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\python39\lib\site-packages\deep_daze\deep_daze.py", line 584, in forward
_, loss = self.train_step(epoch, i)
File "c:\python39\lib\site-packages\deep_daze\deep_daze.py", line 505, in train_step
out, loss = self.model(self.clip_encoding)
File "c:\python39\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\python39\lib\site-packages\deep_daze\deep_daze.py", line 200, in forward
out = self.model()
File "c:\python39\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\python39\lib\site-packages\siren_pytorch\siren_pytorch.py", line 148, in forward
out = self.net(coords, mods)
File "c:\python39\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\python39\lib\site-packages\siren_pytorch\siren_pytorch.py", line 83, in forward
x = layer(x)
File "c:\python39\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\python39\lib\site-packages\siren_pytorch\siren_pytorch.py", line 51, in forward
out = self.activation(out)
File "c:\python39\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\python39\lib\site-packages\siren_pytorch\siren_pytorch.py", line 22, in forward
return torch.sin(self.w0 * x)
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:79] data. DefaultCPUAllocator: not enough memory: you tried to allocate 268435456 bytes.

@LolekLiam
Author

What should I do?

@geeknik

geeknik commented Oct 10, 2021

The last line of the traceback explains it: your machine ran out of memory. You'll need to tweak the settings to fit your hardware, because the defaults aren't going to work for you. Good luck! 👍🏻
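For reference, a minimal sketch of what "tweaking the settings" can look like, using the Python `Imagine` API. The parameter names (`image_width`, `num_layers`, `batch_size`, `gradient_accumulate_every`) are the ones shown in the deep-daze README's low-memory example; the exact values here are illustrative and may need to be lowered further on a CPU-only machine:

```python
from deep_daze import Imagine

# Lower-memory settings: a smaller image and a shallower SIREN network
# shrink the activations behind the 256 MB allocation that failed above.
model = Imagine(
    text = 'hell',
    image_width = 256,               # smaller render size than the 512 default
    num_layers = 16,                 # keep the SIREN network shallow
    batch_size = 1,                  # one batch of cutouts per step
    gradient_accumulate_every = 16   # accumulate gradients to compensate for the small batch
)
model()  # Imagine is an nn.Module; calling it runs the training loop
```

The same options can also be passed to the `imagine` CLI as flags (for example `imagine "hell" --image_width=256 --num_layers=16`), since the CLI forwards keyword arguments through python-fire. Also note that the warnings in your log show CUDA is not available, so everything runs on the CPU; even with smaller settings, generation without a GPU will be very slow.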
