math.sqrt(0.5) #67
Hey there!

The WaveNet paper never mentions multiplying skip connections by `math.sqrt(0.5)`. What is up with that?

(`wavenet_vocoder/wavenet.py`, line 224 at commit 1b571a7)
---

It comes from Deep Voice 3 and the convolutional seq2seq paper (https://arxiv.org/abs/1705.03122). The reason is that I wrote the code based on https://github.com/r9y9/deepvoice3_pytorch. It would probably work without it. The same goes for weight normalization (lines 63 to 64 at 1b571a7).
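For context, the trick in the ConvS2S paper is to multiply the sum of a residual branch's input and output by √0.5, so that (assuming the two terms have equal variance and are uncorrelated) the variance does not grow with depth. A minimal sketch of the idea; `ScaledResidual` and `f` are illustrative names, not modules from either repository:

```python
import math
import torch.nn as nn

class ScaledResidual(nn.Module):
    """ConvS2S-style residual: scale (x + f(x)) by sqrt(0.5).

    If x and f(x) have equal variance and are uncorrelated, the
    sqrt(0.5) factor keeps the output variance equal to the input's.
    """
    def __init__(self, f):
        super().__init__()
        self.f = f

    def forward(self, x):
        return (x + self.f(x)) * math.sqrt(0.5)
```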
---

Thank you! Anything else to watch out for? I am working with the Tacotron 2 authors to reimplement their paper. This means, for example, that I am not normalizing my spectrograms to [0, 1], and I did not apply any decibel clipping. What else is different in your WaveNet from the original WaveNet, besides, for example, the ReLU during upsampling that I mentioned earlier?
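For reference, the kind of preprocessing being contrasted here, normalizing a log-magnitude spectrogram to [0, 1] with a decibel floor, looks roughly like the following sketch. The constant and function names are assumptions for illustration, not code from either repository:

```python
import numpy as np

MIN_LEVEL_DB = -100  # assumed dB floor; illustrative value

def normalize_spectrogram(mag):
    # amplitude -> dB, clipped at a floor, then rescaled to [0, 1]
    db = 20.0 * np.log10(np.maximum(1e-5, mag))
    db = np.maximum(db, MIN_LEVEL_DB)  # decibel clipping
    return np.clip((db - MIN_LEVEL_DB) / -MIN_LEVEL_DB, 0.0, 1.0)
```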
---

Hi @r9y9, the current code is:

```python
skips = None
for f in self.conv_layers:
    x, h = f(x, c, g_bct)
    if skips is None:
        skips = h
    else:
        skips += h
    if self.legacy:
        skips *= math.sqrt(0.5)
x = skips
```

In order to maintain the same variance, I think it should be:

```python
skips = None
for f in self.conv_layers:
    x, h = f(x, c, g_bct)
    if skips is None:
        skips = h
    else:
        skips += h
x = skips / math.sqrt(len(self.conv_layers))
```

As in the Transformer paper, where the authors scale dot products by the square root of the feature dimension and claim that this helps when the feature dimension is large, in our case it will help when the number of layers is large. I haven't tested it yet, but I think it will improve the performance of the model.
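A quick numerical check of the variance argument (a toy sketch, assuming i.i.d. unit-variance skip outputs; not code from the repository):

```python
import math
import torch

torch.manual_seed(0)
n_layers = 24
n = 100_000
hs = [torch.randn(n) for _ in range(n_layers)]  # stand-ins for per-layer skip outputs

# Legacy scaling: multiply the running sum by sqrt(0.5) after each addition.
skips = torch.zeros(n)
for h in hs:
    skips = (skips + h) * math.sqrt(0.5)
print(skips.var())  # ~1.0, but earlier layers are down-weighted geometrically

# Proposed scaling: plain sum, divided once by sqrt(n_layers).
total = sum(hs) / math.sqrt(n_layers)
print(total.var())  # ~1.0, with every layer contributing equally
```

Both schemes keep the variance of the summed skips near 1; the practical difference is that the legacy form shrinks the contribution of earlier layers geometrically, while the proposed form weights all layers equally.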