How to let user choose one model to answer question via python api? #3240
Unanswered
LeeMoofon0222 asked this question in Q&A
Replies: 1 comment 40 replies
-
To ensure that only one model is used to answer the question in your Python API setup, you need to explicitly specify the model in the configuration and make sure the API call is directed to only that model. Here is an example of how you can modify the `build` method of the LiteLLM-based model component:

```python
def build(
    self,
    model: str,
    provider: str,
    api_key: Optional[str] = None,
    streaming: bool = True,
    temperature: Optional[float] = 0.7,
    model_kwargs: Optional[Dict[str, Any]] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    n: int = 1,
    max_tokens: int = 256,
    max_retries: int = 6,
    verbose: bool = False,
) -> LanguageModel:
    try:
        import litellm  # type: ignore

        litellm.drop_params = True
        litellm.set_verbose = verbose
    except ImportError as e:
        raise ChatLiteLLMException(
            "Could not import litellm python package. Please install it with `pip install litellm`"
        ) from e

    provider_map = {
        "OpenAI": "openai_api_key",
        "Azure": "azure_api_key",
        "Anthropic": "anthropic_api_key",
        "Replicate": "replicate_api_key",
        "Cohere": "cohere_api_key",
        "OpenRouter": "openrouter_api_key",
    }

    # Set the API key based on the provider
    api_keys: dict[str, Optional[str]] = {v: None for v in provider_map.values()}
    if variable_name := provider_map.get(provider):
        api_keys[variable_name] = api_key
    else:
        raise ChatLiteLLMException(
            f"Provider {provider} is not supported. Supported providers are: {', '.join(provider_map.keys())}"
        )

    # Ensure only the specified model is used
    if model not in ["gpt-3.5-turbo", "llama"]:
        raise ValueError(f"Unsupported model: {model}")

    llm = ChatLiteLLM(
        model=model,
        client=None,
        streaming=streaming,
        temperature=temperature,
        model_kwargs=model_kwargs if model_kwargs is not None else {},
        top_p=top_p,
        top_k=top_k,
        n=n,
        max_tokens=max_tokens,
        max_retries=max_retries,
        openai_api_key=api_keys["openai_api_key"],
        azure_api_key=api_keys["azure_api_key"],
        anthropic_api_key=api_keys["anthropic_api_key"],
        replicate_api_key=api_keys["replicate_api_key"],
        cohere_api_key=api_keys["cohere_api_key"],
        openrouter_api_key=api_keys["openrouter_api_key"],
    )
    return llm
```

In this example, the question can only go to the single model you pass in as `model`: any unsupported model name raises a `ValueError`, and only the API key for the selected provider is set.
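For reference, here is a minimal usage sketch, assuming `component` is an instance of the model component whose `build` method was modified above (the surrounding class and imports are not shown, and the names here are placeholders):

```python
# Build a single ChatLiteLLM instance; only this model will receive the question.
llm = component.build(
    model="gpt-3.5-turbo",
    provider="OpenAI",
    api_key="sk-...",  # placeholder; supply your real key
    streaming=False,
)

# ChatLiteLLM is a LangChain chat model, so a plain invoke sends the prompt
# to exactly one model and returns its answer.
answer = llm.invoke("What is Langflow?")
print(answer.content)
```

Because `build` returns a single `ChatLiteLLM` instance, only the chosen provider/model combination ever sees the prompt.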
-
Hi,
I'd like to ask how to choose just one model to answer my question. In Langflow, I send the question to two models (GPT and LLaMA) and then combine their outputs.
My code tried to remove one LLM from the flow, but I found that the question is still sent to both models.
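For context, this is roughly how I invoke the flow from the Python API (the flow file name and component IDs below are placeholders, not my actual ones):

```python
from langflow.load import run_flow_from_json

# Placeholder component IDs; the real IDs come from the exported flow JSON.
TWEAKS = {
    "OpenAIModel-abc12": {},
    "OllamaModel-def34": {},
}

result = run_flow_from_json(
    flow="my_flow.json",        # flow exported from the Langflow UI
    input_value="my question",
    tweaks=TWEAKS,
)
print(result)
```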