The loss might be a negative value #6
Comments
Please let me know if you make any progress on the same problem. Thanks.
Does it make sense to clip the value at 0? Or clip the loss at 0?
@begeekmyfriend @joinssmith @celinew1221 @yaringal Do you have any progress? I am hitting the same problem, and I don't think we can simply remove log_var, because it is what measures the uncertainty; if it is removed, how do we learn the weight for each task in a multi-task setting?
During implementation, I think it makes sense to clip log_var at 0. That's what I did.
@celinew1221 Thank you for your help. Would you mind telling me how to implement "clip log_var at 0"?
I'd just use torch.clamp(log_var, min=0)
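For reference, a minimal sketch of how that clamp could sit inside one task's loss term (the function and variable names here are placeholders, not the repository's exact code):

```python
import torch

def clamped_task_term(task_loss, log_var):
    # Keep log_var non-negative so the additive log_var term
    # cannot drag this task's contribution below zero.
    log_var = torch.clamp(log_var, min=0)
    precision = torch.exp(-log_var)
    return precision * task_loss + log_var
```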
@celinew1221 But in the paper "Geometric loss functions for camera pose regression with deep learning", the initial value of log_var is -3.0, so I think it makes no sense to clip the value at 0.
Well, that depends on the losses. It makes no sense to have a negative loss in a cross-entropy loss function, though. This is really a question for the original author. In my case, I don't know what to do other than clip at 0 or use the absolute value.
Did your clipping method give better results than a fixed parameter?
Hi @yaringal, I have also run into this problem and would like to know your understanding of it. Thank you.
If you have a Gaussian likelihood (equivalently, Euclidean loss) then the likelihood can take values larger than 1 (it's a density, not a probability mass). So the loss + penalty terms (i.e. negative log Gaussian density) can indeed become negative - this is not a bug.
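For concreteness, a small numeric check of that point (the numbers are illustrative only): a Gaussian density with a small standard deviation exceeds 1 at its mean, so its negative log-likelihood is already negative.

```python
import math

sigma = 0.1
# Gaussian density evaluated at the mean: 1 / (sqrt(2*pi) * sigma) ~= 3.99 > 1
density_at_mean = 1.0 / (math.sqrt(2 * math.pi) * sigma)
# Negative log-likelihood is therefore negative (~ -1.38), with nothing wrong.
nll = -math.log(density_at_mean)
print(density_at_mean, nll)
```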
Thanks for your good work. I have a question, though. I ported your code into my project and it worked at first, but after several steps the loss became negative. I found that the log_var term was what caused it; when I removed the log_var term the loss looked fine. So I want to know whether there is a better solution for this. Thanks again!
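For context, a minimal sketch of the kind of uncertainty-weighted total loss under discussion (Kendall-et-al.-style weighting; the function and variable names are illustrative, not the repository's code). When a learned log_var becomes sufficiently negative, its additive term can pull the total below zero:

```python
import torch

def uncertainty_weighted_loss(task_losses, log_vars):
    # Per task: exp(-log_var) * loss + log_var.
    # A sufficiently negative learned log_var makes the additive log_var
    # term dominate, so the total can dip below zero without being a bug.
    total = torch.zeros(())
    for loss, log_var in zip(task_losses, log_vars):
        total = total + torch.exp(-log_var) * loss + log_var
    return total
```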