I tried to train a VITS2 model on custom data using the vits2_ljs_base.json config. After 306 epochs, training failed with the following error:
Loading train data: 83%|████████▎ | 205/246 [04:17<00:51, 1.26s/it]
Traceback (most recent call last):
  File "/space/space-lenovo-7/arthur/vits2_pytorch/train.py", line 603, in <module>
    main()
  File "/space/space-lenovo-7/arthur/vits2_pytorch/train.py", line 48, in main
    mp.spawn(
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 239, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
    while not context.join():
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/space/space-lenovo-7/arthur/vits2_pytorch/train.py", line 264, in run
    train_and_evaluate(
  File "/space/space-lenovo-7/arthur/vits2_pytorch/train.py", line 346, in train_and_evaluate
    ) = net_g(x, x_lengths, spec, spec_lengths)
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1156, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1110, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])  # type: ignore[index]
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/space/space-lenovo-7/arthur/vits2_pytorch/models.py", line 1282, in forward
    l_length = self.dp(x, x_mask, w, g=g)
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/space/space-lenovo-7/arthur/vits2_pytorch/models.py", line 102, in forward
    z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
  File "/space/space-lenovo-7/arthur/vits2_pytorch/myvenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/space/space-lenovo-7/arthur/vits2_pytorch/modules.py", line 504, in forward
    x1, logabsdet = piecewise_rational_quadratic_transform(
  File "/space/space-lenovo-7/arthur/vits2_pytorch/transforms.py", line 31, in piecewise_rational_quadratic_transform
    outputs, logabsdet = spline_fn(
  File "/space/space-lenovo-7/arthur/vits2_pytorch/transforms.py", line 82, in unconstrained_rational_quadratic_spline
    ) = rational_quadratic_spline(
  File "/space/space-lenovo-7/arthur/vits2_pytorch/transforms.py", line 114, in rational_quadratic_spline
    if torch.min(inputs) < left or torch.max(inputs) > right:
RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.
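For what it's worth, `torch.min(x)` (the full reduction, no `dim` argument) raises exactly this RuntimeError whenever `x` is empty, so the failure means `inputs` reaching `rational_quadratic_spline` had zero elements for this batch. A minimal reproduction, plus a hypothetical defensive guard (the function name and bounds below are illustrative, not the repo's actual code; the real fix may lie upstream in the data or masking):

```python
import torch

# Reproduce the error: full-reduction min on an empty tensor raises RuntimeError.
try:
    torch.min(torch.empty(0))
except RuntimeError as e:
    print(f"Reproduced: {e}")

def checked_spline_input(inputs: torch.Tensor,
                         left: float = -1.0,
                         right: float = 1.0) -> torch.Tensor:
    """Hypothetical guard: validate the spline domain only when there is
    something to validate, so an empty selection no longer crashes."""
    if inputs.numel() == 0:
        return inputs  # nothing inside the interval for this batch; skip the check
    if torch.min(inputs) < left or torch.max(inputs) > right:
        raise ValueError("Input to the spline transform is outside the domain.")
    return inputs

checked_spline_input(torch.tensor([0.5, -0.2]))  # in-domain: passes
checked_spline_input(torch.empty(0))             # empty: passes instead of raising
```

An empty `inputs` usually points at a degenerate batch element (e.g. a zero-length mask after filtering), so it may also be worth checking the custom dataset for items that produce empty spectrograms or zero-length alignments.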