
MergeLoss regularization term is $\log\sigma$ in paper but $\log\sigma^2$ in code #4

Open
songzeballboy opened this issue Feb 25, 2019 · 3 comments

Comments

@songzeballboy

No description provided.

@songzeballboy changed the title to "MergeLoss regularization term is $\log\sigma$ in paper but $\log\sigma^2$ in code" on Feb 25, 2019
@JadTawil-theonly

I see that as well. Should it not be `(precision ** 2) / 2` instead of just `precision`?

@antgr

antgr commented Oct 15, 2019

So, which one is correct? Did you try them?

@knighthappy

knighthappy commented Oct 29, 2019

If `precision = K.exp(-log_var[0])`, the network learns $\log\sigma^2$, and `precision * (y_true - y_pred)**2. + log_var[0]` equals $\frac{1}{\sigma^2} L(w) + 2\log\sigma$.

If `precision = K.exp(-log_var[0]) ** 2 / 2`, the network learns $\log\sigma$, and `precision * (y_true - y_pred)**2. + log_var[0]` equals $\frac{1}{2\sigma^2} L(w) + \log\sigma$.

The difference between the two is only an overall factor of 2, so for network training they are the same.
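The claimed equivalence can be checked numerically. Below is a minimal sketch using NumPy as a stand-in for the Keras backend (`K.exp` replaced by `np.exp`); the function names and the scalar `log_var` / `log_s` arguments are illustrative, not the repo's actual API:

```python
import numpy as np

def loss_log_var(log_var, y_true, y_pred):
    # Parameterization where the network learns log(sigma^2):
    # precision = exp(-log_var) = 1 / sigma^2
    precision = np.exp(-log_var)
    return precision * (y_true - y_pred) ** 2 + log_var

def loss_log_sigma(log_s, y_true, y_pred):
    # Parameterization where the network learns log(sigma):
    # precision = exp(-log_s)^2 / 2 = 1 / (2 * sigma^2)
    precision = np.exp(-log_s) ** 2 / 2
    return precision * (y_true - y_pred) ** 2 + log_s

sigma = 1.7
y_true, y_pred = 3.0, 2.2

a = loss_log_var(np.log(sigma ** 2), y_true, y_pred)
b = loss_log_sigma(np.log(sigma), y_true, y_pred)

# a = L/sigma^2 + 2*log(sigma), b = L/(2*sigma^2) + log(sigma),
# so a == 2 * b: the two losses differ only by a constant factor of 2.
assert np.isclose(a, 2 * b)
```

Since the factor of 2 multiplies every term of the combined loss uniformly, it only rescales the gradients, which is absorbed by the learning rate.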
