I don't quite understand the calculation of the log-likelihood
```python
# We compute the test log-likelihood
ll = (logsumexp(-0.5 * self.tau * (y_test[None] - Yt_hat)**2., 0) - np.log(T)
      - 0.5 * np.log(2 * np.pi) + 0.5 * np.log(self.tau))
test_ll = np.mean(ll)
```
Why is `logsumexp` used? And why are the predictive variances not used?
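As far as I can tell, the code treats the $T$ stochastic forward passes $\hat{y}_t$ as an equally weighted mixture of Gaussians, each with precision $\tau$, so per test point it computes (if I expand the constants correctly):

$$
\log p(y) \approx \log \frac{1}{T} \sum_{t=1}^{T} \mathcal{N}\!\left(y \mid \hat{y}_t, \tau^{-1}\right)
= \operatorname{logsumexp}_t\!\left(-\tfrac{\tau}{2}\,(y - \hat{y}_t)^2\right) - \log T - \tfrac{1}{2}\log 2\pi + \tfrac{1}{2}\log \tau ,
$$

with `logsumexp` presumably used for numerical stability.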
I tried to calculate the test log-likelihood like this instead:
```python
from scipy.stats import norm

pred_var = np.var(Yt_hat, axis=0) + 1 / self.tau
ll = []
for i in range(y_test.shape[0]):
    ll.append(norm.logpdf(y_test[i][0], MC_pred[i][0], np.sqrt(pred_var[i][0])))
new_test_ll = np.mean(ll)
```
This usually gives a slightly worse log-likelihood. For example, on the concrete dataset with split id 19, the original code gives -3.17, while the code above gives -3.25.
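To make the comparison concrete, here is a self-contained sketch of the two estimators side by side. The data, predictions, and `tau` below are synthetic stand-ins for the real `y_test`, `Yt_hat`, and model precision, just to show that the mixture estimate and the moment-matched Gaussian estimate generally disagree:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical setup: T stochastic forward passes over N scalar test points.
T, N = 50, 200
tau = 1.0                                   # assumed model precision
y_test = rng.normal(size=(N, 1))
# Simulated per-pass predictions with some epistemic spread around the truth.
Yt_hat = y_test[None] + rng.normal(scale=0.5, size=(T, N, 1))

# Estimator 1 (as in the repo): log of an equally weighted mixture of
# T Gaussians, one per forward pass, computed stably via logsumexp.
ll_mix = (logsumexp(-0.5 * tau * (y_test[None] - Yt_hat) ** 2, axis=0)
          - np.log(T) - 0.5 * np.log(2 * np.pi) + 0.5 * np.log(tau))
test_ll_mix = float(np.mean(ll_mix))

# Estimator 2 (moment matching): collapse the mixture into a single
# Gaussian with the predictive mean and variance, then score y_test.
mc_pred = Yt_hat.mean(axis=0)
pred_var = Yt_hat.var(axis=0) + 1.0 / tau
test_ll_gauss = float(np.mean(norm.logpdf(y_test, mc_pred, np.sqrt(pred_var))))

print(test_ll_mix, test_ll_gauss)
```

Both are estimates of the same predictive log-likelihood, but the mixture keeps the multi-modal shape of the MC samples while the moment-matched version flattens it into one Gaussian, so small differences like the -3.17 vs -3.25 above are to be expected.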