
math.sqrt(0.5) #67

Closed
PetrochukM opened this issue May 17, 2018 · 4 comments
@PetrochukM
Contributor

Hey There!

The WaveNet paper never mentions multiplying the skip connections by math.sqrt(0.5). What is up with that?

skips *= math.sqrt(0.5)

@r9y9
Owner

r9y9 commented May 17, 2018

https://github.com/pytorch/fairseq/blob/4973d05ac7651f46e1c7edc8060600b44baf9e71/fairseq/models/fconv.py#L157

It comes from Deep Voice 3 and https://arxiv.org/abs/1705.03122. The reason it is here is that I wrote this code based on https://github.com/r9y9/deepvoice3_pytorch. It would probably work without it.
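For intuition: summing two independent signals of equal variance doubles the variance, and multiplying by sqrt(0.5) undoes exactly that. A minimal numpy sketch (a and b are hypothetical unit-variance activations, not from the repo):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000_000)  # hypothetical unit-variance activation
b = rng.standard_normal(1_000_000)  # independent unit-variance activation

summed = a + b                    # variance roughly doubles to ~2
scaled = summed * math.sqrt(0.5)  # scaling brings it back to ~1

print(summed.var(), scaled.var())
```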

Same for weight normalization:

# If True, apply weight normalization in the same way as DeepVoice3
weight_normalization=True,
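For context, weight normalization (Salimans & Kingma, 2016) reparameterizes each weight vector as w = g * v / ||v||, decoupling its magnitude g from its direction v. A minimal numpy sketch of the reparameterization (v and g are illustrative values, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(64)    # unconstrained direction parameter
g = 3.0                        # learned scalar magnitude

w = g * v / np.linalg.norm(v)  # weight-normalized vector

# By construction, ||w|| equals g regardless of v.
print(np.linalg.norm(w))
```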

@PetrochukM
Contributor Author

Thank you!

Anything else to watch out for?

I'm working with the Tacotron 2 authors to reimplement their paper. This means, for example, that I am not normalizing my spectrogram to [0, 1] and that I did not apply any decibel clipping.

What else is different in your WaveNet compared to the original WaveNet, besides the ReLU during upsampling that I mentioned earlier?

@tuan3w

tuan3w commented Jul 1, 2018

Hi @r9y9,
Currently I'm not able to access my GPU to test my hypothesis, but I think we should discuss this number. As I understand it, the goal of this multiplication is to keep the output at the same variance as its inputs [1] [2]. From your code:

        skips = None
        for f in self.conv_layers:
            x, h = f(x, c, g_bct)
            if skips is None:
                skips = h
            else:
                skips += h
                if self.legacy:
                    skips *= math.sqrt(0.5)

        x = skips

In order to maintain the same variance, I think it should be:

        skips = None
        for f in self.conv_layers:
            x, h = f(x, c, g_bct)
            if skips is None:
                skips = h
            else:
                skips += h

        x = skips / math.sqrt(len(self.conv_layers))

As the authors of the Transformer paper argue, this kind of scaling helps when the feature dimension is large; in our case, it should help when the number of layers is large. I haven't tested it yet, but I think it will improve the performance of the model.
Thanks.
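A quick numpy check (a sketch, not from the repo; hs stands in for the per-layer skip outputs, assumed independent with unit variance) suggests both schemes keep the variance near 1. The practical difference is how the layers are weighted: the repeated sqrt(0.5) gives layer i a weight of roughly sqrt(0.5)**(n_layers - i), exponentially down-weighting early layers, while dividing the plain sum by sqrt(len(self.conv_layers)) weights every layer equally:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n_layers = 8
# Hypothetical stand-ins for the per-layer skip outputs h,
# assumed independent with unit variance.
hs = [rng.standard_normal(200_000) for _ in range(n_layers)]

# Legacy scheme: running sum, scaled by sqrt(0.5) after every addition.
legacy = hs[0]
for h in hs[1:]:
    legacy = (legacy + h) * math.sqrt(0.5)

# Proposed scheme: plain sum, divided once by sqrt(n_layers).
proposed = sum(hs) / math.sqrt(n_layers)

# Both variances come out close to 1; the schemes differ in how much
# each layer contributes, not in the overall scale.
print(legacy.var(), proposed.var())
```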

@stale

stale bot commented May 30, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label May 30, 2019
@stale stale bot closed this as completed Jun 6, 2019