
Update networks.py #470

Open
wants to merge 1 commit into master

Conversation

Roulbac commented Dec 11, 2018

The last downsampling Conv layer needs a stride of 2, just like in the article, to keep the down-sampling factor at 2.
For the 70x70 PatchGAN, the final conv layer that brings the tensor down to Nx1x1x1 needs a kernel size of 4, a stride of 1 and no padding.
With padding, the output tensor has shape Nx1x3x3 instead of Nx1x1x1 (for a 70x70 input).
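
A quick shape check illustrates the padding point. This is only a sketch, not the actual networks.py code: a hypothetical stack of four stride-2 convs followed by the final 4x4 conv, with normalization and activation layers omitted since they do not affect output shapes.

```python
import torch
import torch.nn as nn

def patch_stack(final_padding):
    # Hypothetical discriminator trunk as described above (not the repo's
    # NLayerDiscriminator): four 4x4 stride-2 convs, then a final 4x4
    # stride-1 conv mapping to a single score channel.
    return nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
        nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
        nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1),
        nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1),
        nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=final_padding),
    )

x = torch.randn(1, 3, 70, 70)                 # a single 70x70 patch
print(patch_stack(final_padding=1)(x).shape)  # -> torch.Size([1, 1, 3, 3])
print(patch_stack(final_padding=0)(x).shape)  # -> torch.Size([1, 1, 1, 1])
```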

Edit:
After more checks, it turns out that this correction makes the receptive field 64x64 rather than 70x70. The discriminator is then a convolutional implementation of a sliding window with stride 16 (since there are 4 downsamplings by a factor of 2, i.e. 2^4). With an input of size 80x80, the output is of size 2x2, corresponding to 4 sliding-window positions spaced 16 pixels apart.
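
The same kind of sketch (same hypothetical stack, final conv without padding) also checks the sliding-window reading:

```python
import torch
import torch.nn as nn

# Same hypothetical stack as in the sketch above, final conv without padding.
disc = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
    nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1),
    nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1),
    nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=0),
)

# Four stride-2 downsamplings give an overall stride of 2**4 = 16, so the
# network scores one patch every 16 pixels: an 80x80 input yields a 2x2
# grid of patch scores.
print(disc(torch.randn(1, 3, 80, 80)).shape)  # -> torch.Size([1, 1, 2, 2])
```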

junyanz (Owner) commented Dec 11, 2018

I will keep the PatchGAN consistent with the Torch code. But you are free to use other discriminators in your experiments.
