Error when training with fp16 #75

Open
guijuzhejiang opened this issue Aug 22, 2023 · 0 comments

@guijuzhejiang

When training with the parameter fp16=True, it fails with the error below.
fp16 training is a commonly used way to reduce GPU memory overhead. How should I modify the code to support it?

warnings.warn(warning.format(ret))
0%| | 0/300000 [00:06<?, ?it/s]
Traceback (most recent call last):
File "/home/zzg/workspace/pycharm/HR-VITON/train_condition.py", line 497, in
main()
File "workspace/pycharm/HR-VITON/train_condition.py", line 488, in main
train(opt, train_loader, val_loader, test_loader, board, tocg, D)
File "workspace/pycharm/HR-VITON/train_condition.py", line 280, in train
loss_G.backward()
File "miniconda3/envs/py310_DL_cu118/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "miniconda3/envs/py310_DL_cu118/lib/python3.10/site-packages/torch/autograd/init.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Found dtype Half but expected Float

Process finished with exit code 1
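
No fix was posted in this thread, but "RuntimeError: Found dtype Half but expected Float" during backward() usually means the loss tensor reaching autograd is fp16 while the parameters (and the gradients autograd expects) are fp32. A common cause is casting the model or losses with .half() instead of running the forward pass under autocast. Below is a minimal, generic sketch of PyTorch's torch.cuda.amp pattern (autocast plus GradScaler); the model, optimizer, and loss names are placeholders, not the actual HR-VITON training code.

```python
# Minimal sketch of mixed-precision training with torch.cuda.amp.
# Assumes a CUDA device; the model/optimizer/loss here are placeholders,
# not the HR-VITON networks. Key point: keep the parameters in fp32 and
# let autocast handle the forward pass instead of calling model.half(),
# which is a common cause of "Found dtype Half but expected Float".
import torch
import torch.nn.functional as F

model = torch.nn.Linear(16, 1).cuda()                    # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()                     # scales the loss to avoid fp16 underflow

for step in range(10):
    inputs = torch.randn(8, 16, device="cuda")
    targets = torch.randn(8, 1, device="cuda")
    optimizer.zero_grad()

    # Forward pass and loss under autocast: ops run in fp16 where safe,
    # while the parameters stay fp32.
    with torch.cuda.amp.autocast():
        outputs = model(inputs)
        loss_G = F.mse_loss(outputs, targets)

    # Scale the loss before backward, then step through the scaler.
    scaler.scale(loss_G).backward()
    scaler.step(optimizer)
    scaler.update()
```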
