
Input and output mean scales updated during training (X-UMX) #685

Closed
DavidDiazGuerra opened this issue Nov 16, 2023 · 5 comments

@DavidDiazGuerra
Contributor

Hello,

I've just realized that the elements of mean_scale in X-UMX are optimized during training, since they're registered as parameters. As far as I understand, that dictionary contains the pre-computed mean and std of the dataset for input normalization. Are they really supposed to be optimized during training? Shouldn't they be registered as buffers rather than as parameters?
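
To illustrate the difference I mean, here is a minimal sketch (the names are illustrative, not the actual X-UMX attributes):

```python
import torch
import torch.nn as nn

class Normalizer(nn.Module):
    def __init__(self, mean, std):
        super().__init__()
        # Registered as a Parameter: appears in model.parameters(),
        # so any optimizer built from them will update it.
        self.mean = nn.Parameter(torch.as_tensor(mean))
        # Registered as a buffer: saved in state_dict() and moved by
        # .to(device)/.cuda(), but never returned by parameters(),
        # so the optimizer leaves it untouched.
        self.register_buffer("std", torch.as_tensor(std))

    def forward(self, x):
        return (x - self.mean) / self.std
```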

Best,
David

@mpariente
Collaborator

Hello !

I don't know this architecture well, but people from Sony do: @r-sawata, WDYT?

@r-sawata
Contributor

Sorry to have kept you waiting. I was busy with the CVPR deadline, which finally passed yesterday. I'll check this in a few days.

@DavidDiazGuerra
Contributor Author

The easiest solution would be using requires_grad=False, as is done with the STFT window, but this can cause problems if people doing transfer learning or fine-tuning start freezing/unfreezing parts of the model without being aware of it (I recently ran into this problem, and it's how I found this bug).
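
For illustration, this is the kind of fine-tuning code that trips over the requires_grad=False workaround (Net and its attributes are hypothetical, just to show the pattern):

```python
import torch
import torch.nn as nn

# Hypothetical module: dataset statistics stored as a *frozen parameter*
# (requires_grad=False), mimicking the STFT-window workaround.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(8, 2)
        self.stats = nn.Parameter(torch.zeros(8), requires_grad=False)

model = Net()

# Typical fine-tuning step: "unfreeze everything". This silently
# re-enables optimization of the dataset statistics as well, because
# they are still parameters.
for p in model.parameters():
    p.requires_grad = True

print(model.stats.requires_grad)  # True -- the stats are trainable again
```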

I would suggest registering both the scalers and the STFT window as buffers instead of parameters, since that's the general recommendation in PyTorch for model tensors that are not being optimized. Doing this for the STFT window is quite straightforward (I can open a PR with a fix I've been testing that works well and is backward compatible with pre-trained models), but I'm unsure how to do it for the scales, since PyTorch doesn't have an nn.BufferDict to replace nn.ParameterDict.
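
For what it's worth, a minimal sketch of a BufferDict stand-in (my own sketch, not part of PyTorch or the asteroid codebase):

```python
import torch
import torch.nn as nn

class BufferDict(nn.Module):
    """Hypothetical stand-in for the missing nn.BufferDict: each tensor
    is stored via register_buffer, so it is saved in state_dict() and
    moved by .to()/.cuda(), but never appears in parameters()."""

    def __init__(self, tensors):
        super().__init__()
        for name, tensor in tensors.items():
            self.register_buffer(name, tensor)

    def __getitem__(self, name):
        return getattr(self, name)

    def keys(self):
        return self._buffers.keys()

# Usage, e.g. instead of an nn.ParameterDict of dataset statistics:
mean_scale = BufferDict({"input_mean": torch.zeros(8),
                         "input_scale": torch.ones(8)})
```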

@r-sawata
Contributor

r-sawata commented Mar 15, 2024

Sorry for being so late. I checked this carefully and found that mean and scale with requires_grad=True are fine. Namely, optimizing them during training is our intention.

As you may know, X-UMX is an extended version of the original Open-Unmix (UMX). As shown in the initialization of their implementation, input_mean and input_scale are meant to be learned during training. Please see here:
https://github.com/sigsep/open-unmix-pytorch/blob/4318fb278e1863f4cf8556b513987faf14a15832/openunmix/model.py#L84-L95
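
Roughly, the linked initialization follows this pattern (a paraphrase, not a verbatim copy of the openunmix code):

```python
import torch
import torch.nn as nn

class UMXLikeInput(nn.Module):
    """Paraphrase of the linked openunmix initialization: pre-computed
    dataset statistics only *initialize* the normalization, which is
    then learned like any other weight."""

    def __init__(self, nb_bins, input_mean=None, input_scale=None):
        super().__init__()
        mean = ((-input_mean[:nb_bins]).float()
                if input_mean is not None else torch.zeros(nb_bins))
        scale = ((1.0 / input_scale[:nb_bins]).float()
                 if input_scale is not None else torch.ones(nb_bins))
        # nn.Parameter defaults to requires_grad=True, so the optimizer
        # keeps updating these statistics during training.
        self.input_mean = nn.Parameter(mean)
        self.input_scale = nn.Parameter(scale)
```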

@DavidDiazGuerra
Contributor Author

Oh, okay. That's a bit weird to me, but if it works and the original UMX also did it that way, I guess it's better to keep it as it is.

Thanks,
David
