Some questions about the inverse autoregressive flow #11
Hi there, I am just confused by the inverse autoregressive flow. Do you use another network to fit the distribution q(z|x)? Can I understand it in the following way?

As far as I know, in flow-based models people want to model the data distribution p(x), so from a random z we can get x = f(z). Here in this paper, q(z|x) is the distribution you model: you train another network g so that, from a random e ~ N(0, I), you get z = g(e), and then you can sample a realistic image through the decoder of the NVAE model, Decoder(z).

Hi @uestcwangxiao, what we are doing is similar to these two papers, where normalizing flows are used to represent flexible approximate posterior distributions: we are learning normalizing flows to represent q(z|x), not p(x).
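To make the z = g(e) picture concrete, here is a minimal NumPy sketch of a single affine inverse-autoregressive step. The masked linear maps `W_mu` and `W_s` are stand-ins for the autoregressive networks (they are hypothetical, not the actual NVAE implementation): each output dimension depends only on earlier input dimensions, so the Jacobian is triangular and its log-determinant is just the sum of the log scales, which is what lets IAF turn a simple Gaussian sample e into a sample z from a more flexible q(z|x) with a tractable density.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4

# Hypothetical autoregressive maps: strictly lower-triangular weights,
# so output i depends only on inputs with index < i.
W_mu = np.tril(rng.normal(size=(D, D)), k=-1)
W_s = np.tril(rng.normal(size=(D, D)), k=-1)

def iaf_step(e):
    """One affine IAF step: z = sigma(e) * e + mu(e), computed in parallel."""
    mu = W_mu @ e
    sigma = np.exp(0.5 * (W_s @ e))   # positive scales
    z = sigma * e + mu
    log_det = np.sum(np.log(sigma))   # Jacobian is triangular
    return z, log_det

e = rng.normal(size=D)                # e ~ N(0, I)
z, log_det = iaf_step(e)

# Density of the transformed sample via the change-of-variables formula:
# log q(z|x) = log N(e; 0, I) - log|det dz/de|
log_q = -0.5 * np.sum(e**2) - 0.5 * D * np.log(2 * np.pi) - log_det
```

Note the direction of the transform: sampling z from e is a single parallel pass, which is exactly why IAF is convenient inside a VAE encoder, whereas evaluating q at an arbitrary z would require inverting the flow dimension by dimension.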