Fix total cost for fine-tuned models giving zero #5144
Conversation
```diff
@@ -92,7 +92,7 @@ def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
     completion_tokens = token_usage.get("completion_tokens", 0)
     prompt_tokens = token_usage.get("prompt_tokens", 0)
     model_name = response.llm_output.get("model_name")
     if model_name and model_name in MODEL_COST_PER_1K_TOKENS:
```
Hm, this will error on custom model names. Could we create a small helper function:

```python
def known_model_cost(model_name: str) -> bool:
    # Fine-tuned names look like "<base>:ft-..."; price them as "<base>-finetuned".
    if ":ft-" in model_name:
        model_name = model_name.split(":")[0] + "-finetuned"
    return model_name in MODEL_COST_PER_1K_TOKENS
```
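For example (the fine-tuned name below is made up, and this assumes `*-finetuned` entries exist in `MODEL_COST_PER_1K_TOKENS`):

```python
# Hypothetical fine-tuned name following OpenAI's "<base>:ft-<org>-<date>" scheme.
known_model_cost("ada:ft-your-org-2023-05-17")  # True iff "ada-finetuned" is in the table
known_model_cost("my-custom-model")             # False, instead of erroring downstream
```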
> custom model names
Uhm, I removed the `model_name in MODEL_COST_PER_1K_TOKENS` check because I thought it wouldn't be possible to initialize an OpenAI LLM with a custom model name in the first place, since `langchain/llms/openai.py` line 521 will raise an error. This was the error behind #2887.
That error is only raised if you call that function; you can still initialize with a custom name.
Yes, sorry, you can indeed initialize the model. I meant when you generate a response from your initialized OpenAI model: you always call `get_sub_prompts`, which in turn calls `max_tokens_for_prompt`, which finally calls `modelname_to_contextsize`. However, `max_tokens_for_prompt` is only called if `params["max_tokens"] == -1`, as in the sketch below.
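A minimal sketch of what I mean (the fine-tuned name is made up, and this assumes the `langchain.llms.OpenAI` interface discussed here, with an API key configured):

```python
from langchain.llms import OpenAI

# Initialization succeeds even though the name is not in any known-model table.
llm = OpenAI(model_name="ada:ft-your-org-2023-05-17", max_tokens=-1)

# Generation walks get_sub_prompts -> max_tokens_for_prompt ->
# modelname_to_contextsize, which raises ValueError for the unknown name,
# but only because max_tokens == -1 forces that code path.
llm("Hello")
```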
OK, then yes, I agree: the most general solution is to create a helper function.
What about something like:

```python
def get_based_model_name(model_name: str) -> str:
    # Map a fine-tuned name like "<base>:ft-<org>-<date>" to its
    # "<base>-finetuned" pricing key; other names pass through unchanged.
    if ":ft-" in model_name:
        model_name = f"{model_name.split(':')[0]}-finetuned"
    return model_name


def get_openai_token_cost_for_model(
    model_name: str, num_tokens: int, is_completion: bool = False
) -> float:
    model_name = get_based_model_name(model_name)
    suffix = "-completion" if is_completion and model_name.startswith("gpt-4") else ""
    model = model_name.lower() + suffix
    if model not in MODEL_COST_PER_1K_TOKENS:
        raise ValueError(
            f"Unknown model: {model_name}. Please provide a valid OpenAI model name. "
            "Known models are: " + ", ".join(MODEL_COST_PER_1K_TOKENS.keys())
        )
    return MODEL_COST_PER_1K_TOKENS[model] * num_tokens / 1000

...

def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
    """Collect token usage."""
    if response.llm_output is None:
        return None
    self.successful_requests += 1
    if "token_usage" not in response.llm_output:
        return None
    token_usage = response.llm_output["token_usage"]
    completion_tokens = token_usage.get("completion_tokens", 0)
    prompt_tokens = token_usage.get("prompt_tokens", 0)
    # Default to "" so get_based_model_name never receives None.
    model_name = get_based_model_name(response.llm_output.get("model_name", ""))
    if model_name and model_name in MODEL_COST_PER_1K_TOKENS:
        completion_cost = get_openai_token_cost_for_model(
            model_name, completion_tokens, is_completion=True
        )
        prompt_cost = get_openai_token_cost_for_model(model_name, prompt_tokens)
        self.total_cost += prompt_cost + completion_cost
    self.total_tokens += token_usage.get("total_tokens", 0)
    self.prompt_tokens += prompt_tokens
    self.completion_tokens += completion_tokens
```
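A quick sanity check of the helper (the names are made up; this assumes the `-finetuned` keys from #5127 are present in `MODEL_COST_PER_1K_TOKENS`):

```python
# Fine-tuned names collapse to their base pricing key; others pass through.
print(get_based_model_name("ada:ft-your-org-2023-05-17"))  # -> "ada-finetuned"
print(get_based_model_name("gpt-3.5-turbo"))               # -> "gpt-3.5-turbo"
```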
OpenAI fine-tuned model giving zero token cost
Very simple fix to the previously committed solution allowing fine-tuned OpenAI models.
Improves #5127
Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@agola11