In theory, it doesn't matter: F0 normalized to 0-1 should be accurate to about 1e-7, so float is precise enough.
But the cast from float to double before the cumsum operation was written by the author of NSF-HiFiGAN. To find out what effect it actually has, an issue could be opened under the NSF-HiFiGAN project.
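For reference, a quick sanity check of that claim (my own sketch, not code from either repository): float32 has a machine epsilon of about 1.19e-7, so a single F0 value normalized to 0-1 is representable to roughly that precision.

```python
import torch

# Quick check of the "accurate to ~1e-7" claim for a single normalized value.
# float32 spacing around 1.0 (machine epsilon) is ~1.19e-7.
print(torch.finfo(torch.float32).eps)               # 1.1920929e-07

x = torch.tensor(0.4371234, dtype=torch.float64)    # arbitrary normalized F0 value
rounding_error = (x.float().double() - x).abs().item()
print(rounding_error)                               # on the order of 1e-8, below 1e-7
```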
OS version
Darwin arm64
GPU
mps
Python version
Python 3.8.16
PyTorch version
2.0.0
Branch of sovits
4.0 (Default)
Dataset source (Used to judge the dataset quality)
N/A
Where the problem occurs or what command you executed
inference
Situation description
Tips:
PYTORCH_ENABLE_MPS_FALLBACK=1
-d mps
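A minimal sketch of how I wired those two settings up locally (PYTORCH_ENABLE_MPS_FALLBACK is a documented PyTorch environment variable; the rest is just my own setup, and inference is then launched with -d mps):

```python
import os

# Let operators that are missing on MPS fall back to the CPU.
# To be safe, set this before importing torch.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# Pick the MPS device when it is available, otherwise stay on CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(device)
```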
Issues:
f0_coarse to int #142
Related code:
so-vits-svc/vdecoder/nsf_hifigan/models.py, lines 144 to 146 at 0298cd4
so-vits-svc/vdecoder/nsf_hifigan/models.py, lines 159 to 162 at 0298cd4
There are some double type casts in the source code. Are they required? Some methods related to double are not implemented on mps devices. I think float is enough, but I am not sure. I have modified and tested it locally, and it works well.
Is there a significant loss of precision in moving the torch.cumsum operation from double to float?
CC: @ylzz1997
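To quantify that, this is the kind of experiment I ran locally (a hypothetical minimal sketch, not the actual models.py code): accumulate per-sample phase increments the way an NSF-style sine generator does, once in float64 and once in float32, and compare the running sums. On mps the float64 path is not even available, since that backend has no float64 support.

```python
import torch

# Hypothetical experiment (not the code from models.py): build per-sample
# phase increments from a fake constant F0, accumulate them with cumsum in
# float64 and in float32, and measure how far the float32 running sum
# drifts from the float64 reference over 10 seconds of audio.
sr = 44100
n = sr * 10
f0 = torch.full((1, n), 220.0, dtype=torch.float64)   # fake constant 220 Hz contour
rad = f0 / sr                                          # per-sample increment in [0, 1)

sum64 = torch.cumsum(rad, dim=1)                       # double-precision reference
sum32 = torch.cumsum(rad.float(), dim=1).double()      # single-precision run

drift = (sum64 - sum32).abs().max().item()
print(drift)   # worst-case absolute error of the accumulated phase (before % 1)
```

The phase actually fed to the sine is this sum modulo 1, so how audible the drift is depends on its size relative to one period; all I can say from my local test is that inference still sounds fine with float.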
Log
Supplementary description
No response