Hello,

We understand that using the computed likelihood directly for maximum likelihood estimation can be computationally intensive. However, we are curious whether this technique is theoretically and practically feasible. We would like to experiment with this approach on smaller models and datasets. Any advice or code assistance would be greatly appreciated.
:)
Once again, thank you for your outstanding work.
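This is not the repository's code and not a claim about what the authors intend; it is only a minimal FFJORD-style sketch of the idea on a 2D toy problem: compute the log-likelihood from the instantaneous change-of-variables formula along an ODE and backpropagate through the solve to maximize it directly. It assumes `torchdiffeq` is installed; the `CNF` class, the Hutchinson probe, and the two-blob toy dataset are illustrative choices, not anything taken from the repo.

```python
# Hedged sketch, NOT the repository's code: a small continuous normalizing flow
# trained by directly maximizing the likelihood obtained from the instantaneous
# change-of-variables formula, on a 2D toy dataset. Requires `torchdiffeq`.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class CNF(nn.Module):
    """Velocity field v(t, x); forward also returns -divergence for the log-density ODE."""

    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )
        self.eps = None  # Hutchinson probe vector, fixed for the duration of one solve

    def forward(self, t, states):
        x, _ = states
        with torch.enable_grad():
            if not x.requires_grad:            # data enters the solve as a no-grad leaf
                x = x.requires_grad_(True)
            tt = t * torch.ones(x.shape[0], 1, dtype=x.dtype, device=x.device)
            dx = self.net(torch.cat([x, tt], dim=1))
            # Hutchinson estimate of div v = E[eps^T (dv/dx) eps]
            grad = torch.autograd.grad((dx * self.eps).sum(), x, create_graph=True)[0]
            div = (grad * self.eps).sum(dim=1)
        return dx, -div                         # d(log p)/dt = -div v


def log_likelihood(flow, x):
    """Map data (t=0) to the base Gaussian (t=1), accumulating the log-density change."""
    flow.eps = torch.randn_like(x)
    logp0 = torch.zeros(x.shape[0], device=x.device)
    tspan = torch.tensor([0.0, 1.0], device=x.device)
    zs, dlogps = odeint(flow, (x, logp0), tspan, method="rk4", options={"step_size": 0.1})
    z, delta_logp = zs[-1], dlogps[-1]
    base = torch.distributions.Normal(0.0, 1.0)
    return base.log_prob(z).sum(dim=1) - delta_logp


if __name__ == "__main__":
    torch.manual_seed(0)
    flow = CNF()
    opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
    centers = torch.tensor([[2.0, 0.0], [-2.0, 0.0]])  # toy bimodal data: two blobs
    for step in range(500):
        x = centers[torch.randint(0, 2, (256,))] + 0.3 * torch.randn(256, 2)
        loss = -log_likelihood(flow, x).mean()   # direct maximum likelihood
        opt.zero_grad(); loss.backward(); opt.step()
        if step % 100 == 0:
            print(f"step {step}: NLL {loss.item():.3f}")
```

A fixed-step `rk4` solver keeps the memory of backpropagating through the solver predictable on a toy problem; for anything larger, the adjoint method (`odeint_adjoint`) would be the usual choice, which is part of why the full-scale version is so expensive.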
I used the likelihood code in the repository for maximum-likelihood training. The result is that what the score function learned during pre-training is lost, and training directly on the likelihood performs poorly.
What you normally learn with the score function is an irregular distribution (which is what real datasets look like).
But what is learned by training on the computed likelihood is that distribution collapsed towards the middle.
Like a Gaussian distribution or a VAE, the learned model only captures a single peak (mode).
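One hypothetical way to see the collapse being described (reusing the `flow` trained in the sketch above, so not self-contained, and again not the repository's code) is to sample by integrating the same ODE backwards from base-Gaussian noise and check whether both modes of the toy data survive or everything piles up into one blob:

```python
# Follow-up to the sketch above: sample from the trained flow by integrating the
# learned ODE in reverse (t: 1 -> 0) from Gaussian noise, then plot the samples.
# Assumes `flow` (a trained CNF from the previous snippet) exists in this session.
import torch
import matplotlib.pyplot as plt
from torchdiffeq import odeint


@torch.no_grad()
def sample(flow, n=2000):
    z = torch.randn(n, 2)
    # Only the velocity net is needed for sampling; the divergence term is irrelevant here.
    def vel(t, x):
        tt = t * torch.ones(x.shape[0], 1, dtype=x.dtype, device=x.device)
        return flow.net(torch.cat([x, tt], dim=1))
    xs = odeint(vel, z, torch.tensor([1.0, 0.0]), method="rk4", options={"step_size": 0.1})
    return xs[-1]


x_gen = sample(flow).numpy()
plt.scatter(x_gen[:, 0], x_gen[:, 1], s=3)
plt.title("samples from the likelihood-trained flow")
plt.show()
```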