Incorrect number of convolutional layers in SegNet model #5

Open
glhr opened this issue Mar 28, 2021 · 0 comments
glhr commented Mar 28, 2021

In the encoder and decoder blocks, there can never be more than 2 convolutional layers, even when the n_blocks argument is > 2, because at most one extra convolutional layer is ever added:

        layers = [nn.Conv2d(n_in_feat, n_out_feat, 3, 1, 1),
                  nn.BatchNorm2d(n_out_feat),
                  nn.ReLU(inplace=True)]

        if n_blocks > 1:
            layers += [nn.Conv2d(n_out_feat, n_out_feat, 3, 1, 1),
                       nn.BatchNorm2d(n_out_feat),
                       nn.ReLU(inplace=True)]
            if n_blocks == 3:
                layers += [nn.Dropout(drop_rate)]

This means that encoder_n_layers = (2, 2, 3, 3, 3) never actually produces 3-layer blocks; for n_blocks == 3 it only adds dropout. The decoder has the same issue. As a result, the model doesn't match the VGG-16/SegNet architecture.
Instead, it should be something like:

        layers = []
        for layer in range(n_blocks):
            layers += [nn.Conv2d(n_in_feat if layer == 0 else n_out_feat, n_out_feat, 3, 1, 1),
                       nn.BatchNorm2d(n_out_feat),
                       nn.ReLU(inplace=True)]
        if n_blocks > 2:
            layers += [nn.Dropout(drop_rate)]
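
The difference in layer counts can be checked directly. Below is a minimal, hypothetical sketch: it uses torch-free stand-in classes for nn.Conv2d, nn.BatchNorm2d, nn.ReLU, and nn.Dropout (so it runs without PyTorch installed), and reproduces both block builders to count the convolutions each one emits.

```python
# Stand-in classes (hypothetical, torch-free) so the layer-counting
# logic can be verified without PyTorch installed.
class Conv2d:
    def __init__(self, n_in, n_out, k, s, p): pass

class BatchNorm2d:
    def __init__(self, n): pass

class ReLU:
    def __init__(self, inplace=False): pass

class Dropout:
    def __init__(self, p): pass


def buggy_block(n_in_feat, n_out_feat, n_blocks, drop_rate=0.5):
    # Original code: at most two convolutions, regardless of n_blocks.
    layers = [Conv2d(n_in_feat, n_out_feat, 3, 1, 1),
              BatchNorm2d(n_out_feat),
              ReLU(inplace=True)]
    if n_blocks > 1:
        layers += [Conv2d(n_out_feat, n_out_feat, 3, 1, 1),
                   BatchNorm2d(n_out_feat),
                   ReLU(inplace=True)]
        if n_blocks == 3:
            layers += [Dropout(drop_rate)]
    return layers


def fixed_block(n_in_feat, n_out_feat, n_blocks, drop_rate=0.5):
    # Proposed fix: one convolution per requested block.
    layers = []
    for layer in range(n_blocks):
        layers += [Conv2d(n_in_feat if layer == 0 else n_out_feat,
                          n_out_feat, 3, 1, 1),
                   BatchNorm2d(n_out_feat),
                   ReLU(inplace=True)]
    if n_blocks > 2:
        layers += [Dropout(drop_rate)]
    return layers


def count_convs(layers):
    return sum(isinstance(l, Conv2d) for l in layers)


print(count_convs(buggy_block(64, 128, 3)))  # 2, despite n_blocks=3
print(count_convs(fixed_block(64, 128, 3)))  # 3, matching VGG/SegNet
```

With n_blocks=3 the original builder still yields only 2 convolutions, while the fixed builder yields 3, as the VGG-16 encoder stages of SegNet require.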