Loss_l: inf in training? #35
Comments
@transcendentsky You may look at this pull request for details.
I have tried this way, but the problem still exists.
@transcendentsky Once you got the inf loss, what was your next batch loss? Was it still inf? I actually also see some inf losses when training on COCO, but it seems OK for final convergence.
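A minimal sketch of how one might check whether the inf persists across batches and avoid updating the weights on a bad batch (assuming a standard PyTorch training loop with a MultiBox-style criterion; `net`, `criterion`, `optimizer`, and `data_loader` are placeholders, not names from this repo):

```python
import math
import torch

for images, targets in data_loader:
    out = net(images)
    loss_l, loss_c = criterion(out, targets)  # assumed to return (localization, confidence) losses
    loss = loss_l + loss_c

    if not math.isfinite(loss.item()):
        # Report which component blew up and skip the update,
        # so a single bad batch cannot corrupt the weights.
        print(f"non-finite loss: loss_l={loss_l.item()}, loss_c={loss_c.item()}")
        optimizer.zero_grad()
        continue

    optimizer.zero_grad()
    loss.backward()
    # Optional: clip gradients so one large step cannot push weights to NaN/inf.
    torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=10.0)
    optimizer.step()
```

If the loss is only occasionally inf and the following batches are finite, skipping the bad batch is usually enough to keep training stable.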
I tracked the code and found that the problem comes from the prior (
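One quick way to confirm a prior-box problem is to inspect the generated priors directly. This is a rough sketch, assuming the priors are a `(num_priors, 4)` tensor in center-size form `[cx, cy, w, h]` as in SSD-style code; `priors` is a placeholder for whatever the prior-box layer returns:

```python
import torch

def check_priors(priors: torch.Tensor) -> None:
    w, h = priors[:, 2], priors[:, 3]
    bad = (w <= 0) | (h <= 0)
    print("priors with non-positive w/h:", int(bad.sum()))
    print("min/max prior values:", priors.min().item(), priors.max().item())
    # A zero prior width or height makes the ratio gt_wh / prior_wh (and its log)
    # in the box encoding blow up, which then shows up as loss_l = inf.
```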
@transcendentsky The current code has not been tested in Python 2.
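If the code is being run under Python 2, integer division is one plausible source of zero-sized priors, since Python 2 floor-divides integers. A small illustrative check (the values below are examples, not taken from this repo):

```python
# Force true division so ratios like min_size / image_size do not
# silently truncate to 0 under Python 2.
from __future__ import division

min_size, image_size = 30, 300
s_k = min_size / image_size   # 0.1 with true division; 0 under plain Python 2
print(s_k)
```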
Hello, first of all, thanks for releasing your code.
I got a training loss of inf, actually loss_l = inf. I used your original code (only fixed some bugs), but I don't know why I get inf.
Parameters: lr: 0.004, batch size: 32, base model: vgg_reducedfc.pth
GPU: 1080 Ti
Any comments would be appreciated.
Thanks very much!
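Besides the priors, another common cause of loss_l = inf is a degenerate ground-truth box in the dataset. A hedged sanity check, assuming each dataset item returns `(image, target)` where `target` is an `(N, 5)` array of `[xmin, ymin, xmax, ymax, label]` in normalized coordinates (`dataset` is a placeholder):

```python
import torch

def find_degenerate_boxes(dataset) -> None:
    for idx in range(len(dataset)):
        image, target = dataset[idx]
        boxes = torch.as_tensor(target, dtype=torch.float32)[:, :4]
        w = boxes[:, 2] - boxes[:, 0]
        h = boxes[:, 3] - boxes[:, 1]
        if (w <= 0).any() or (h <= 0).any():
            # A zero-area box makes the log term in the offset encoding
            # non-finite, which propagates into loss_l.
            print("degenerate box at sample", idx)
```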