
[Bug] RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument. #2555

Closed
offside609 opened this issue Apr 25, 2023 · 13 comments
Labels: bug, wontfix

offside609 commented Apr 25, 2023

Describe the bug

I am training a voice-cloning model using VITS. My dataset is in LJSpeech format, and I am training an Indian English model directly from characters, with the phonemizer disabled. Training runs for 35-40 epochs and then stops abruptly; sometimes it runs longer, around 15k steps, before stopping. I can share the notebook I am using for training. I have completed training with this notebook successfully several times, but recently I keep hitting this error.

I also get this warning at the beginning of training.

/usr/local/lib/python3.10/dist-packages/torch/functional.py:641: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:862.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
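
The warning itself points at the fix: opt into complex output and, if downstream code still expects the old real-valued layout, convert it back with `torch.view_as_real`. A minimal sketch (dummy signal, typical VITS-style STFT parameters, not code from this repo):

```python
import torch

y = torch.randn(22050)              # 1 second of dummy audio at 22.05 kHz
n_fft, hop_length = 1024, 256       # typical spectrogram settings
window = torch.hann_window(n_fft)

spec = torch.stft(
    y, n_fft, hop_length=hop_length, window=window,
    return_complex=True,            # the non-deprecated path
)
spec_real = torch.view_as_real(spec)  # recover the old (..., 2) real format
print(spec.shape, spec_real.shape)    # (freq, frames) vs (freq, frames, 2)
```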

I am attaching screenshots of the error I encounter every time.
[Three screenshots of the error traceback, taken 2023-04-13 at 11:08-11:09 PM]

To Reproduce

https://colab.research.google.com/drive/1k8Fk5kfU_aZ2lM7Esih3Ud1fYtNlujOQ?authuser=0#scrollTo=A49iDwajBtu_

I am using this Colab notebook for training; every configuration used for training can be found there. Note that training will run for 35-40 epochs and then stop.

Expected behavior

Training should continue.

Logs

No response

Environment

https://colab.research.google.com/drive/1k8Fk5kfU_aZ2lM7Esih3Ud1fYtNlujOQ?authuser=0#scrollTo=A49iDwajBtu_

Additional context

I have tried to resolve both the warning and the error, as I think they are related.
I tried the following to resolve the warning:
jaywalnut310/vits#15
and the following to fix the error:
#1949
It looks like torch 1.8 is unstable and its distribution is no longer available. I also tried 1.9, since the issue above prescribed it, but that distribution is not available either.

prakharpbuf (Contributor) commented

Check out #1949; it might be useful.

offside609 (Author) commented

It was useful, but the downgraded torch 1.8.0 is no longer stable and a GPU build is not available. Please help.


mniiinm commented Jun 5, 2023

Exactly the same problem. Any fix?


mniiinm commented Jun 6, 2023

It seems the problem comes from mixed_precision; if you change it to False, the error goes away.
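
If it helps anyone, a minimal sketch of where that flag lives, assuming the usual Coqui TTS VitsConfig (the exact field name may differ across versions):

```python
from TTS.tts.configs.vits_config import VitsConfig

# Sketch: turn off automatic mixed precision for VITS training.
# `mixed_precision` comes from the shared training config in Coqui TTS;
# everything else is left at its defaults here.
config = VitsConfig()
config.mixed_precision = False   # the workaround suggested above
```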


aqez commented Jul 13, 2023

I'm also hitting this. Has anyone had any luck? I've tried disabling mixed precision but still get the error.

Edit: I found my issue; it was actually my data.
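
For anyone checking their own data, a small sketch of the kind of validation that can catch this: scan an LJSpeech-style metadata.csv for empty transcripts, missing wavs, and clips too short to survive the spectrogram/segment pipeline. The paths and the minimum-length threshold here are hypothetical, not from this thread:

```python
import csv
import wave
from pathlib import Path

DATASET = Path("dataset")   # hypothetical LJSpeech-style layout
MIN_SECONDS = 1.0           # hypothetical minimum clip length

with open(DATASET / "metadata.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter="|"):
        clip_id, text = row[0], row[-1].strip()
        wav_path = DATASET / "wavs" / f"{clip_id}.wav"
        if not text:
            print(f"empty transcript: {clip_id}")
        if not wav_path.exists():
            print(f"missing wav: {wav_path}")
            continue
        with wave.open(str(wav_path)) as w:
            seconds = w.getnframes() / w.getframerate()
        if seconds < MIN_SECONDS:
            print(f"too short ({seconds:.2f}s): {clip_id}")
```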


mniiinm commented Jul 14, 2023

> I'm also hitting this. Has anyone had any luck? I've tried disabling mixed precision but still get the error.
> Edit: I found my issue; it was actually my data.

Can you say more about what was wrong with your data?


mesut92 commented Jul 25, 2023

I get the same error. I am using Python 3.10 and torch 2.0. What is the problem with the data?

stale bot closed this as completed Sep 14, 2023
lucasjinreal commented
Half-precision issue; just add the dim arg to the failing min() call.
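
To make that suggestion concrete, a small sketch (not code from this repo) of what the error in the title means and what specifying `dim` changes:

```python
import torch

# An empty batch (0 rows) can appear when every sample in a step is
# filtered out. A full reduction over zero elements raises the error
# from the title, while a per-dimension reduction returns empty results.
x = torch.empty(0, 5)

try:
    x.min()  # RuntimeError: min(): Expected reduction dim to be specified ...
except RuntimeError as e:
    print(e)

values, indices = x.min(dim=1)  # per-row min: no crash, just empty output
print(values.shape)             # torch.Size([0])
```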


yijingshihenxiule commented Dec 26, 2023

> Half-precision issue; just add the dim arg to the failing min() call.

@lucasjinreal Hello, could you please clarify where to add it? I need your help, thank you!


VafaKnm commented Jan 23, 2024

Any update on this issue? I have the same problem.
