
For fairseq fix missing dataset, model var initialization #3471

Closed
wants to merge 1 commit

Conversation

shoaib42

A simple fix for a fairseq model-loading issue.

Issue: When loading the model we get

Traceback (most recent call last):
  File "/home/shoaib/temp.py", line 6, in <module>
    api = TTS("tts_models/pol/fairseq/vits")
  File "/home/shoaib/.local/lib/python3.10/site-packages/TTS/api.py", line 74, in __init__
    self.load_tts_model_by_name(model_name, gpu)
  File "/home/shoaib/.local/lib/python3.10/site-packages/TTS/api.py", line 171, in load_tts_model_by_name
    model_path, config_path, vocoder_path, vocoder_config_path, model_dir = self.download_model_by_name(
  File "/home/shoaib/.local/lib/python3.10/site-packages/TTS/api.py", line 129, in download_model_by_name
    model_path, config_path, model_item = self.manager.download_model(model_name)
  File "/home/shoaib/.local/lib/python3.10/site-packages/TTS/utils/manage.py", line 385, in download_model
    model_item, model_full_name, model, md5sum = self._set_model_item(model_name)
  File "/home/shoaib/.local/lib/python3.10/site-packages/TTS/utils/manage.py", line 304, in _set_model_item
    model_full_name = f"{model_type}--{lang}--{dataset}--{model}"
UnboundLocalError: local variable 'dataset' referenced before assignment

Neither dataset nor model is defined/set in the branch that handles fairseq models.

Addition: Initialized the variables in that branch.

Maybe it's better to conform with the rest of the code, where the variables are unpacked from the str.split call (see the sketch below). What do you think?
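
For illustration only, here is a minimal sketch of that split-based variant. It assumes the fairseq branch of _set_model_item in TTS/utils/manage.py only ever sees names shaped like tts_models/<lang>/fairseq/<model>, as in the traceback above; the dict handling is abbreviated and not copied from the real source.

def _set_model_item(self, model_name):
    # Sketch only: e.g. "tts_models/pol/fairseq/vits" splits into four parts
    # for fairseq and non-fairseq names alike, so dataset and model are
    # always bound before model_full_name is built.
    model_type, lang, dataset, model = model_name.split("/")
    if "fairseq" in model_name:
        model_item = {"model_type": model_type}  # plus the fairseq-specific fields, omitted here
    else:
        # regular models keep being looked up in the parsed .models.json dict
        model_item = self.models_dict[model_type][lang][dataset][model]
        model_item["model_type"] = model_type
    model_full_name = f"{model_type}--{lang}--{dataset}--{model}"
    return model_item, model_full_name, model, None  # md5sum handling omitted from this sketch

Either way, the essential part is just that dataset and model are assigned in the fairseq branch before the f-string uses them.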

PS: I could not get a successful make install and run the test suite afterwards, but I did verify the fix by applying the changes and running this code:

import torch
from TTS.api import TTS

# Pick the GPU when available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Loading this fairseq model is what previously raised the UnboundLocalError
api = TTS("tts_models/pol/fairseq/vits").to(device)

# Synthesize "Dzień dobry" ("Good morning") in the voice from speech.wav
api.tts_with_vc_to_file(
    "Dzień dobry",
    speaker_wav="speech.wav",
    file_path="output.wav",
)

@CLAassistant commented Dec 27, 2023

CLA assistant check
All committers have signed the CLA.

@yodatak commented Jan 3, 2024

Thanks, it works for me after patching the existing pip-installed TTS at /home/yodatak/.local/lib/python3.10/site-packages/TTS/utils/manage.py.
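
(For anyone applying the same manual patch, the installed file can be located from Python itself; this snippet is only an illustration, not part of the PR.)

# Print the path of the installed module that needs the one-line fix
import TTS.utils.manage as manage
print(manage.__file__)  # e.g. ~/.local/lib/python3.10/site-packages/TTS/utils/manage.py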

stale bot commented Feb 3, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.

@stale stale bot added the wontfix This will not be worked on but feel free to help. label Feb 3, 2024
@stale stale bot closed this Feb 10, 2024
Labels
wontfix This will not be worked on but feel free to help.
Projects
None yet

3 participants