Hello, I have a problem. When I train on FFHQ data at 256×256, both the G loss and D loss are NaN.

Here is the log:

D_loss: nan, D_loss_grad_norm: nan, D_lr: 0.001882
D_reg: 0.002352, D_reg_grad_norm: 0.001569, G_loss: nan
G_loss_grad_norm: nan, G_lr: 0.0016, G_reg: nan
G_reg_grad_norm: nan, pl_avg: nan, seen: 15
0%| | 14/1000000 [00:51<634:57:52, 2.29s/it]
Hey,
Is the loss NaN from the very first iteration, or does it only become NaN after a couple of iterations?
What settings do you use when you run the training?
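To help answer the first question, one way to pinpoint exactly when the losses diverge is to record the loss at each step and scan for the first non-finite value. A minimal sketch below; `first_bad_iteration` is a hypothetical helper, not part of this repository:

```python
import math

def first_bad_iteration(losses):
    """Return the index of the first NaN/Inf loss, or None if all are finite.

    Hypothetical debugging helper: if it returns 0, the model diverges
    immediately (often a data/normalization issue); if it returns a later
    index, the training blows up over time (often a learning-rate issue).
    """
    for i, loss in enumerate(losses):
        if math.isnan(loss) or math.isinf(loss):
            return i
    return None

# Losses that diverge at iteration 3:
print(first_bad_iteration([0.71, 0.69, 0.72, float("nan")]))  # → 3
# Healthy run:
print(first_bad_iteration([0.71, 0.68, 0.66]))                # → None
```

In a PyTorch training loop, the same check can be done per step with `torch.isnan(loss).any()` to stop and inspect the batch that triggered it.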