[WIP] implement power low compression loss #18

Open
wants to merge 1 commit into master from powerlaw-compression-loss

Conversation

stegben
Contributor

@stegben stegben commented Sep 1, 2019

fixed: #14

Still in progress and not yet tested. I have to wait until my initial run finishes...

@stegben stegben force-pushed the powerlaw-compression-loss branch from 6b2ca6f to 0acfd8e Compare September 1, 2019 11:30
@xiaozhuo12138

Will modifying the loss function improve performance?

@stegben
Contributor Author

stegben commented Nov 28, 2019

Sorry, my 1080 Ti is too weak. Could anybody with better hardware help me run the experiment?

@xiaozhuo12138

My own server is too slow. But I found that the implemented loss function differs from what the paper's authors describe; right now it only uses the amplitude spectrum.

@linzwatt

I have access to NVIDIA V100 GPUs, and I am currently training a modified version of this model. I can test this PR if you like.

@stegben
Contributor Author

stegben commented Jan 15, 2020

@linzwatt That would be awesome! Thanks a lot.

@kwikwag

kwikwag commented Feb 23, 2020

Hey @linzwatt, any results?

@kwikwag

kwikwag commented Feb 29, 2020

I am not sure if I ran it right -- I used the config as provided and I get this:

2020-02-29 13:51:24,952 - INFO - Starting new training run
../torch/csrc/utils/python_arg_parser.cpp:698: UserWarning: This overload of add_ is deprecated:
        add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
        add_(Tensor other, Number alpha)
2020-02-29 13:51:34,106 - INFO - Wrote summary at step 1
2020-02-29 13:51:37,200 - ERROR - Loss exploded to nan at step 2!
2020-02-29 13:51:37,459 - INFO - Exiting due to exception: Loss exploded
Traceback (most recent call last):
  File "/mnt/disks/voicefilter-1/data/voicefilter/utils/train.py", line 91, in train
    raise Exception("Loss exploded")
Exception: Loss exploded
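
The traceback shows training aborting as soon as the loss becomes NaN. A guard of roughly this shape would produce that behavior (a sketch only, not the repository's actual `utils/train.py`; the function name is hypothetical):

```python
import math

def check_loss(loss_value, step):
    # Abort training when the loss is NaN or infinite, mirroring the
    # "Loss exploded" message seen in the log above.
    if math.isnan(loss_value) or math.isinf(loss_value):
        raise Exception("Loss exploded")
```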

kwikwag added a commit to kwikwag/voicefilter that referenced this pull request Mar 2, 2020
@Edresson

@kwikwag I think this implementation does not follow the formula in the paper: in the second term, I believe `torch.clamp(x, min=0.0)` is not necessary. Additionally, the order of `torch.pow` and `torch.abs` is incorrect; following the formula, `torch.pow` must be applied first.

Another point: to avoid gradient explosion, an epsilon should be added to `output` and `target_mag`.
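
Putting those two points together, here is a minimal sketch of a magnitude-only power-law compression loss with an epsilon term (the function name, the `power=0.3` exponent, and the `eps` value are assumptions for illustration, not this PR's actual code):

```python
import torch

def power_law_compression_loss(est_mag, target_mag, power=0.3, eps=1e-8):
    # Compress magnitudes with an exponent < 1. Adding eps before the
    # power keeps the gradient of x**0.3 finite at x == 0, which is one
    # common cause of the loss exploding to NaN.
    est_c = (est_mag + eps) ** power
    tgt_c = (target_mag + eps) ** power
    return torch.mean((est_c - tgt_c) ** 2)
```

The key difference from the code under discussion is that the compression is applied directly to the (non-negative) magnitude spectrograms, so no clamp is needed, and the epsilon makes the backward pass safe at zero bins.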

Successfully merging this pull request may close these issues.

Need to try power-law compression loss