We are trying to get a latent representation of real data by passing it through some encoder. After passing the real data (x) through the encoder, we pass both the latent representation of x and x itself to the discriminator, which classifies the example x as real or fake. The output of the discriminator can be thought of as the probability of an example being real. So the parameters of the encoder should be updated in a way that maximizes the discriminator output; that way the encoder will learn a good latent representation of the real data. But the paper says that we need to minimize the discriminator output w.r.t. the parameters of the encoder. Why is that?
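To make the setup concrete, here is a minimal toy sketch of the pipeline described above (all names, weights, and dimensions are illustrative, not from the paper): an encoder E maps a real example x to a latent code z, and a discriminator D scores the pair (x, z) as a probability of being real. The sign question is then whether the encoder's objective should be written as maximizing D(x, E(x)) (equivalently, minimizing -log D(x, E(x))) or as minimizing the discriminator output directly, as the paper states.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy linear encoder and discriminator with fixed random weights
# (hypothetical stand-ins for the networks in the paper).
W_enc = rng.normal(size=(2, 4))     # latent dim 2, data dim 4
w_disc = rng.normal(size=(4 + 2,))  # scores the concatenated pair (x, z)

def encode(x):
    return W_enc @ x                # z = E(x)

def discriminate(x, z):
    # D(x, z) in (0, 1): interpreted as the probability that x is real
    return sigmoid(w_disc @ np.concatenate([x, z]))

x = rng.normal(size=4)              # one real example
z = encode(x)
p_real = discriminate(x, z)

# Encoder loss under the "maximize D" convention described in the question:
# maximize D(x, E(x))  <=>  minimize -log D(x, E(x))
loss_maximize_convention = -np.log(p_real)
print(p_real, loss_maximize_convention)
```

This is just a sketch of the forward pass the question describes, so the two sign conventions can be compared side by side; it is not an implementation of the paper's method.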
Please forgive me if my understanding is wrong.