The loss might be a negative value #6

Open
begeekmyfriend opened this issue Apr 23, 2019 · 11 comments

Comments

@begeekmyfriend

Thanks for your good work. But I have a question. I ported your code into my project and it worked at first. However, after several steps the loss became negative, and I found that it was the log_var term that caused this. When I removed the log_var term the loss was fine. So I want to know if there is any better solution for that? Thanks again!

loss += K.sum(precision * (y_true - y_pred)**2. + log_var[0], -1)
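For context, here is a minimal NumPy sketch (illustrative only, not taken from the repo) of the per-task term in the line above, with precision = exp(-log_var): once log_var is learned down to a negative value and the residual is small, the whole term is negative.

import numpy as np

# Per-task uncertainty-weighted term from the quoted line:
#   precision * (y_true - y_pred)**2 + log_var, with precision = exp(-log_var)
def task_term(y_true, y_pred, log_var):
    precision = np.exp(-log_var)
    return precision * (y_true - y_pred) ** 2 + log_var

print(task_term(y_true=1.00, y_pred=1.01, log_var=-3.0))  # ~ -3.00: small residual, negative term
print(task_term(y_true=1.00, y_pred=2.00, log_var=-3.0))  # ~ 17.09: large residual dominates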
@gakkiri

gakkiri commented May 2, 2019

Please let me know if you make any progress on the same problem. Thanks.

@decewei

decewei commented Jul 11, 2019

Does it make sense to clip the value at 0? Or clip the loss at 0?

@lilaczhou

@begeekmyfriend @joinssmith @celinew1221 @yaringal Do you have any progress? I am facing the same problem, and I don't think we can remove log_var, because it is the way uncertainty is measured; if it is removed, how do we learn the weight for each task in multi-task learning?
So have you found better solutions?

@decewei

decewei commented Aug 25, 2019

> @begeekmyfriend @joinssmith @celinew1221 @yaringal Do you have any progress? I am facing the same problem, and I don't think we can remove log_var, because it is the way uncertainty is measured; if it is removed, how do we learn the weight for each task in multi-task learning?
> So have you found better solutions?

During implementation, I think it makes sense to clip log_var at 0. That's what I did.

@lilaczhou

@celinew1221 Thank you for your help. Would you mind telling me how to implement "clip log_var at 0"?

@decewei

decewei commented Aug 26, 2019

> @celinew1221 Thank you for your help. Would you mind telling me how to implement "clip log_var at 0"?

I'd just use torch.clamp(log_var, min=0)
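For what it's worth, here is a minimal PyTorch sketch of this clamping idea (names are illustrative, not from the repo). One caveat: torch.clamp has zero gradient below the bound, so a log_var that is already negative stops receiving updates through this term.

import torch

# Clamp the learned log-variance at 0 before using it in the per-task term.
# With log_var >= 0, exp(-log_var) <= 1, so exp(-log_var) * residual**2 + log_var >= 0.
def clamped_task_term(y_true, y_pred, log_var):
    log_var = torch.clamp(log_var, min=0.0)
    precision = torch.exp(-log_var)
    return torch.sum(precision * (y_true - y_pred) ** 2 + log_var)

log_var = torch.nn.Parameter(torch.tensor([-3.0]))   # learnable per-task log-variance
y_true = torch.tensor([1.00])
y_pred = torch.tensor([1.01])
print(clamped_task_term(y_true, y_pred, log_var))    # ~ 0.0001, never negative by construction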

@ghoshaw

ghoshaw commented Oct 24, 2019

@celinew1221 But in the paper "Geometric loss functions for camera pose regression with deep learning", the initial value of log_var is -3.0. So I think it makes no sense to clip the value at 0.
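To put a rough number on this objection (illustrative arithmetic only, not from the paper): the per-task weight in the loss above is exp(-log_var), so initializing log_var at -3.0 corresponds to a weight of about 20, while clipping log_var at 0 caps the weight at 1.

import math

# Weight implied by the paper's init of log_var = -3.0 vs. the maximum weight once log_var is clipped at 0.
print(math.exp(-(-3.0)))  # ~20.09
print(math.exp(-0.0))     # 1.0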

@decewei

decewei commented Oct 24, 2019 via email

@ghoshaw

ghoshaw commented Oct 25, 2019

Did your clip method give better results than a fixed parameter?

@cswwp

cswwp commented Nov 16, 2019

Hi @yaringal, I have also run into this problem, and I would like to know your understanding of it. Thank you.

@yaringal
Owner

If you have a Gaussian likelihood (equivalently, Euclidean loss) then the likelihood can take values larger than 1 (it's a density, not a probability mass). So the loss+penalty terms (i.e. negative log Gaussian density) can indeed become negative - this is not a bug.
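To make this concrete, a small numeric sketch (illustrative, not from the repo): the negative log of a Gaussian density drops below zero once the fitted variance is small enough, because the density itself exceeds 1.

import math

# Negative log-likelihood of a 1-D Gaussian:
#   0.5 * log(2*pi*sigma^2) + (y - mu)^2 / (2*sigma^2)
def gaussian_nll(y, mu, sigma):
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

print(gaussian_nll(y=1.0, mu=1.0, sigma=1.0))  # ~ 0.92: positive
print(gaussian_nll(y=1.0, mu=1.0, sigma=0.1))  # ~ -1.38: negative; the density at the mean is ~3.99 > 1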
