fix: Alibaba Cloud full modal docking error #2302
```diff
@@ -26,6 +26,8 @@ def new_instance(model_type, model_name, model_credential: Dict[str, object], **
         optional_params['max_tokens'] = model_kwargs['max_tokens']
     if 'temperature' in model_kwargs and model_kwargs['temperature'] is not None:
         optional_params['temperature'] = model_kwargs['temperature']
+    if model_name == 'qwen-omni-turbo':
+        optional_params['streaming'] = True
     return AliyunBaiLianSpeechToText(
         model=model_name,
         api_key=model_credential.get('api_key'),
```
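The patched parameter setup can be sketched in isolation as follows. This is a minimal sketch, not the actual implementation: the helper name `build_optional_params` is hypothetical, and the surrounding `new_instance` method, the `AliyunBaiLianSpeechToText` client, and credential handling are omitted.

```python
from typing import Dict


def build_optional_params(model_name: str,
                          model_kwargs: Dict[str, object]) -> Dict[str, object]:
    """Sketch of the patched optional-parameter setup in new_instance."""
    optional_params: Dict[str, object] = {}
    if 'max_tokens' in model_kwargs and model_kwargs['max_tokens'] is not None:
        optional_params['max_tokens'] = model_kwargs['max_tokens']
    if 'temperature' in model_kwargs and model_kwargs['temperature'] is not None:
        optional_params['temperature'] = model_kwargs['temperature']
    # The fix: force streaming on for qwen-omni-turbo, presumably because
    # this model only returns output in streaming mode.
    if model_name == 'qwen-omni-turbo':
        optional_params['streaming'] = True
    return optional_params
```

With this logic, `streaming` is injected only for `qwen-omni-turbo`; other models keep their caller-supplied parameters untouched.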
The provided code appears to be part of a function that initializes an instance of `AliyunBaiLianSpeechToText`.

Summary of Changes: No changes were suggested, since the current implementation already addresses the expected use cases and best practices for setting up the parameters. If streaming isn't typically required across all instances of this speech-to-text service, it might make sense to remove the conditionals around enabling `streaming`. If you need further modifications or optimizations, feel free to let me know!
There is one potential issue in the provided Python code: the `new_instance` method does not handle models other than `'qwen-omni-turbo'`. If another model requires streaming to be enabled, this logic should also cover it. Here is a revised version of the `new_instance` method with added support for additional models.

In this updated function, placeholder values (`another_supported_model`) mark where you would add any additional models that might require streaming. The function catches `AttributeError` to ensure robustness even if there's an unexpected configuration change within your application. This ensures all relevant parameters are correctly set when creating a `BaiLianChatModel`.
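The revised code the comment refers to did not survive extraction. A minimal sketch of what such a generalization might look like follows; `another_supported_model` is the placeholder named in the comment itself (not a real model), and the helper name `build_params_with_streaming_models` is an assumption made for this sketch.

```python
from typing import Dict

# Models that require streaming output. 'another_supported_model' is a
# placeholder from the review comment, not an actual model identifier.
STREAMING_MODELS = {'qwen-omni-turbo', 'another_supported_model'}


def build_params_with_streaming_models(model_name: str,
                                       model_kwargs: Dict[str, object]) -> Dict[str, object]:
    """Generalized parameter setup: enable streaming for any model in
    STREAMING_MODELS rather than a single hard-coded name."""
    optional_params: Dict[str, object] = {}
    try:
        if model_kwargs.get('max_tokens') is not None:
            optional_params['max_tokens'] = model_kwargs['max_tokens']
        if model_kwargs.get('temperature') is not None:
            optional_params['temperature'] = model_kwargs['temperature']
    except AttributeError:
        # Mirrors the review's suggestion: tolerate model_kwargs not being
        # a mapping (e.g. None) instead of crashing on attribute access.
        pass
    if model_name in STREAMING_MODELS:
        optional_params['streaming'] = True
    return optional_params
```

A set-membership check keeps the streaming rule in one place, so adding another streaming-only model later is a one-line change instead of another `if` branch.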