Bug in function accumulate #112
Is there a reason you opened the same issue for this repository and rosinality/stylegan2-pytorch#172?
Just so people who use this repo get a warning. Also, these are two unrelated repos.
@rosinality Why do you use EMA to update the model rather than directly optimizing the model?
@shoutOutYangJie The exponential moving average is used to increase stability and sample quality in GANs. Check Section C.1 of "Large Scale GAN Training for High Fidelity Natural Image Synthesis".
@ParthaEth The original implementation uses the function
The accumulate function found in
train.py
- https://github.com/rosinality/style-based-gan-pytorch/blob/master/train.py#L25 only accumulates a running average of the optimizer-trainable parameters of the generator model. If the generator has state that is not trained by the optimizer, such as the running statistics kept by batch-norm layers, this function will not accumulate it. The StyleGAN model specifically has no such state, so for this particular project the function is valid, but it should not be copied over to another project.
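The failure mode can be sketched with plain dicts standing in for a real `nn.Module` (the function names, the dict layout, and the buffer-copy fix below are all illustrative, not the repo's code):

```python
def accumulate_params_only(ema, live, decay=0.999):
    """Mirrors the reported bug: only 'params' entries are averaged;
    'buffers' (e.g. batch-norm running stats) are never touched."""
    for k in ema["params"]:
        ema["params"][k] = (decay * ema["params"][k]
                            + (1 - decay) * live["params"][k])

def accumulate_with_buffers(ema, live, decay=0.999):
    """One common fix: average the params and copy buffers verbatim,
    so non-trainable state still reaches the EMA copy."""
    accumulate_params_only(ema, live, decay)
    ema["buffers"].update(live["buffers"])

ema = {"params": {"w": 0.0}, "buffers": {"running_mean": 0.0}}
live = {"params": {"w": 1.0}, "buffers": {"running_mean": 5.0}}

accumulate_params_only(ema, live)
print(ema["buffers"]["running_mean"])  # stale: still 0.0

accumulate_with_buffers(ema, live)
print(ema["buffers"]["running_mean"])  # now matches live: 5.0
```

In PyTorch terms, the distinction is `named_parameters()` (trainable, covered by the original function) versus `named_buffers()` (non-trainable state, silently skipped), which is exactly why the code is safe here but unsafe to copy into a model that uses batch norm.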