Hi.
I have noticed the operation here (`features = (features - 0.5)*2`) in the Generative Adversarial Networks (GAN) notebook. I don't understand why we need to do this here. The mean and standard deviation of the MNIST dataset are 0.1307 and 0.3081. Can you please explain the meaning of doing this? Looking forward to your reply.
Good question. Which notebook is that? My spontaneous thought is that I probably did that because PyTorch's data transformation normalizes pixels to the [0, 1] range, and I wanted to have the images in the [-1, 1] range.
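For illustration, here is a minimal sketch of that rescaling, assuming the standard torchvision MNIST loader (the variable names are just for this example): `ToTensor()` already scales the pixels to [0, 1], and the extra line shifts and stretches them to [-1, 1].

```python
import torch
from torchvision import datasets, transforms

# ToTensor() converts the uint8 pixels (0-255) to floats in [0, 1]
dataset = datasets.MNIST(root='data', train=True, download=True,
                         transform=transforms.ToTensor())

features, _ = dataset[0]          # shape: (1, 28, 28), values in [0, 1]
features = (features - 0.5) * 2   # shift to [-0.5, 0.5], then stretch to [-1, 1]

print(features.min().item(), features.max().item())  # approximately -1.0 and 1.0
```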
https://github.com/rasbt/deeplearning-models/blob/master/pytorch_ipynb/gan/gan.ipynb
Why do we need the images in the [-1, 1] range? What is the difference between the [0, 1] and [-1, 1] ranges? Does the image range have a big influence on the network's performance? I didn't see a similar operation in the previous networks, so why do we need it in this one? Thank you again for your reply.
Usually, gradient descent behaves a bit better if the values are centered at 0 (ideally, the mean should be zero). In practice, I don't notice big differences, though, to be honest.
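As a side note, the same rescaling can also be folded into the transform pipeline with `Normalize`, since (x - 0.5) / 0.5 equals (x - 0.5) * 2. This is just a sketch of an equivalent way to write it, not necessarily what the notebook does:

```python
from torchvision import transforms

# Normalize computes (x - mean) / std, so mean=0.5, std=0.5 gives (x - 0.5) * 2,
# i.e., it maps [0, 1] to [-1, 1] so the values are roughly centered at zero.
transform = transforms.Compose([
    transforms.ToTensor(),                          # [0, 255] -> [0, 1]
    transforms.Normalize(mean=(0.5,), std=(0.5,)),  # [0, 1] -> [-1, 1]
])
```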