Attention map implementation #9
Comments
This is the self-attention module.
Hello, thanks for your comment. I am wondering why you only have it in the generator_test and discriminator_test modules. Why are two different architectures defined for the generator and the discriminator? Best regards,
Sorry, I am not the author of the code, but I think one is SN-GAN and the other is SA-GAN.
Thanks for your response, that makes sense. In the SAGAN paper, the attention layer is present in both the generator and the discriminator, at both training and test time.
It's my pleasure~
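For readers following the thread: below is a minimal, illustrative PyTorch sketch of the SAGAN-style self-attention block being discussed. It is not the repository's code; the class name, parameter names, and the channel-reduction factor of 8 are assumptions based on the SAGAN paper's description (1x1 query/key/value projections, a softmax attention map over spatial positions, and a learnable gamma scaling the attended features before the residual connection).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over a feature map (illustrative sketch)."""
    def __init__(self, in_channels):
        super().__init__()
        # 1x1 convolutions produce the query, key, and value projections.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key   = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable scale, initialised to 0 so the block starts as an identity mapping.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.size()
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # (b, n, c//8)
        k = self.key(x).view(b, -1, n)                       # (b, c//8, n)
        attn = F.softmax(torch.bmm(q, k), dim=-1)            # (b, n, n) attention map
        v = self.value(x).view(b, -1, n)                      # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                           # residual connection
```

A quick shape check: an input of shape (2, 64, 32, 32) passes through unchanged, e.g. `SelfAttention(64)(torch.randn(2, 64, 32, 32)).shape == (2, 64, 32, 32)`.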
Hi,
I couldn't find the implementation of the attention layer inside the network models. The SAGAN paper mentions that the self-attention mechanism was added at different stages of the network and that the variants were compared against each other. Would you please let me know where you have implemented that?
Bests,
Samaneh
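Regarding the "different stages" question: in the SAGAN paper the self-attention block is inserted at a particular feature-map resolution in both networks, and the paper compares placing it at different resolutions. Below is a hedged sketch of one possible placement inside a generator, reusing the SelfAttention block (and imports) from the sketch above; the layer sizes, names, and the choice of the 32x32 stage are illustrative assumptions, not this repository's architecture.

```python
class Generator(nn.Module):
    """Toy generator showing where a self-attention block might be inserted."""
    def __init__(self, z_dim=128, base_channels=64):
        super().__init__()
        self.fc  = nn.Linear(z_dim, base_channels * 8 * 4 * 4)
        self.up1 = nn.ConvTranspose2d(base_channels * 8, base_channels * 4, 4, 2, 1)  # 4x4   -> 8x8
        self.up2 = nn.ConvTranspose2d(base_channels * 4, base_channels * 2, 4, 2, 1)  # 8x8   -> 16x16
        self.up3 = nn.ConvTranspose2d(base_channels * 2, base_channels,     4, 2, 1)  # 16x16 -> 32x32
        self.attn = SelfAttention(base_channels)       # attention on the 32x32 feature maps
        self.up4 = nn.ConvTranspose2d(base_channels, 3, 4, 2, 1)                      # 32x32 -> 64x64

    def forward(self, z):
        x = self.fc(z).view(z.size(0), -1, 4, 4)
        x = F.relu(self.up1(x))
        x = F.relu(self.up2(x))
        x = F.relu(self.up3(x))
        x = self.attn(x)                               # self-attention on mid-level features
        return torch.tanh(self.up4(x))

# Example: Generator()(torch.randn(2, 128)) produces a (2, 3, 64, 64) batch of images.
```

Moving `self.attn` between the `up2`, `up3`, and `up4` stages reproduces the kind of placement comparison the paper describes.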