Set Working Default Models in HF Parsers #1221
Conversation
# The default model is openai/whisper-large-v3, which does not work as of
# 02/13/2024. Instead, default to a free model (which supports remote
# inference) with the next most "likes" in HF
# https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=likes
Thanks for documenting
@@ -47,6 +47,13 @@ def refine_completion_params(model_settings: dict[Any, Any]) -> dict[str, Any]:
        if key.lower() in supported_keys:
            completion_data[key.lower()] = model_settings[key]

    # The default model is suno/bark, which requires HF Pro subscription
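To make the pattern in the hunks above concrete, here is a minimal sketch of the default-model fallback being discussed. This is not the actual PR diff: the model id `example/free-tts-model` and the reduced key set are placeholders for illustration.

```python
from typing import Any, Dict

# Placeholder id; the PR picks a real free model from the HF hub instead.
DEFAULT_TTS_MODEL = "example/free-tts-model"


def refine_completion_params(model_settings: Dict[Any, Any]) -> Dict[str, Any]:
    """Copy supported settings, then fall back to a working default model."""
    supported_keys = {"model"}
    completion_data: Dict[str, Any] = {}
    for key in model_settings:
        if key.lower() in supported_keys:
            completion_data[key.lower()] = model_settings[key]
    # Fall back to a model that supports free remote inference instead of
    # one that requires an HF Pro subscription (e.g. suno/bark).
    if completion_data.get("model") is None:
        completion_data["model"] = DEFAULT_TTS_MODEL
    return completion_data
```

A caller that passes no model gets the free default; an explicitly chosen model is left untouched.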
Nit: I feel we should remove these as defaults from the input schema? Similar issue we've had where people can't use GPT-4 because that requires paid access
Just so I understand, this comment is not relevant after the next PR, right? I just did them as separate PRs to separate concerns
Yea, I guess my point is more that the comment "default model is suno/bark" won't apply to this anymore after the next PR, but not a blocker
@@ -43,6 +43,13 @@ def refine_completion_params(model_settings: dict[Any, Any]) -> dict[str, Any]:
        if key.lower() in supported_keys:
            completion_data[key.lower()] = model_settings[key]

    # The default model is openai/whisper-large-v3, which does not work as of
Same comment as below
Couple of nits, but the input schema is independent of the package itself, so not a blocker; there's just a bit of confusion if we decide to update the prompt schema and this comment then becomes out of sync
# Update Default Models in HF Prompt Schemas

Update the prompt schemas for ASR and TTS remote inference to match the defaults set in #1221

<img width="1479" alt="Screenshot 2024-02-13 at 11 25 38 AM" src="https://github.com/lastmile-ai/aiconfig/assets/5060851/28e48030-0fc3-49d2-8c6b-ad0ea142d4d8">
<img width="1465" alt="Screenshot 2024-02-13 at 11 26 42 AM" src="https://github.com/lastmile-ai/aiconfig/assets/5060851/b5d132da-8777-47e9-95f1-73cf4f25a906">

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/1222).
* __->__ #1222
* #1221
# Add Remote Inference Parsers to Gradio Cookbook Parser Registry

Was getting annoyed with manually updating this to test the remote parsers each time. Just copied the registration from what we do in Gradio and did a quick check that a few worked.

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/1223).
* __->__ #1223
* #1222
* #1221
Update HF extension to 0.0.9

I messed up for 0.0.8 because I forgot to rebase. This includes Ryan's changes from #1221
Set Working Default Models in HF Parsers

The default model parsers for ASR and TTS don't work:
* openai/whisper-large-v3 just throws 500 errors (even on the model card in HF)
* suno/bark requires a HF Pro subscription to run

For now, we can just default to the next-best models for those tasks (based on like count) which support remote inference.
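The "next-best model based on like count" selection above can be sketched as a simple max over candidate metadata. The model ids and like counts below are invented for illustration; in practice the numbers would come from the HF model hub listing sorted by likes.

```python
from typing import List, Tuple


def pick_default_model(candidates: List[Tuple[str, int]]) -> str:
    """Return the candidate model id with the highest like count.

    candidates: (model_id, likes) pairs, assumed to be models that
    already support free remote inference.
    """
    return max(candidates, key=lambda c: c[1])[0]


# Made-up candidate data for illustration only.
asr_candidates = [
    ("example/asr-model-a", 1200),
    ("example/asr-model-b", 950),
]
```

Filtering to free, remotely-hostable models first and only then ranking by likes is what keeps the fallback usable without a paid subscription.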
Testing:
Ensure the model defaults work:
Note: was getting an error when using the travel.aiconfig.json config
Stack created with Sapling. Best reviewed with ReviewStack.