Update count_tokens.py #459
Conversation
- integrated returns into main snippet
- updated code comments
- pulled text of prompts out of the requests to generate_content (see the sketch below)
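A minimal sketch of that last pattern (the model name and prompt text are illustrative placeholders, not taken from the diff):

```python
import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called.
model = genai.GenerativeModel("models/gemini-1.5-flash")

# Before: the prompt text was embedded directly in the generate_content call.
# response = model.generate_content("The quick brown fox jumps over the lazy dog.")

# After: the prompt is pulled out into a variable, so count_tokens and
# generate_content are guaranteed to see exactly the same input.
prompt = "The quick brown fox jumps over the lazy dog."
print(model.count_tokens(prompt))
response = model.generate_content(prompt)
```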
Drive-by review
# Returns the "context window" for the model (the combined input and output token limits) | ||
|
||
# Returns the "context window" for the model, | ||
# which is the combined input and output token limits. | ||
print(f"{model_info.input_token_limit=}") |
Here the output is printed as key=value, whereas in most other places it's key: value. Is there some system/method behind this?
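For what it's worth, key=value is exactly what Python 3.8+ self-documenting f-strings (f"{expr=}") produce, so this may be an artifact of the "=" specifier rather than a deliberate convention. A quick illustration:

```python
limit = 1048576

# The "=" specifier echoes the expression text itself, then its value.
print(f"{limit=}")        # limit=1048576  (the key=value style)
print(f"limit: {limit}")  # limit: 1048576 (the key: value style used elsewhere)
```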
print(response.usage_metadata)
# ( prompt_token_count: 11, candidates_token_count: 73, total_token_count: 84 )
Note that the token count differs between the two approaches: 10 vs 11
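For concreteness, the two numbers come from two different code paths, roughly as below (a sketch; the model name and prompt are placeholders). One plausible explanation for the off-by-one is that the assembled request wraps the text slightly differently from the bare string handed to count_tokens, but that is a guess, not documented behavior:

```python
import google.generativeai as genai

model = genai.GenerativeModel("models/gemini-1.5-flash")  # illustrative
prompt = "The quick brown fox jumps over the lazy dog."

# Path 1: ask the service to count the bare prompt, before any request is made.
print(model.count_tokens(prompt).total_tokens)      # e.g. 10

# Path 2: the usage the service reports for the actual generate_content request.
response = model.generate_content(prompt)
print(response.usage_metadata.prompt_token_count)   # e.g. 11
```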
print(response.usage_metadata)
# ( prompt_token_count: 264, candidates_token_count: 80, total_token_count: 345 )
Some off-by-1 issues here: prompt token count 263 vs 264.
Also, 264 + 80 != 345.
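A tiny consistency check would catch this kind of stale number in the sample output; something like the following (a sketch, assuming `response` is the result of the call above and no cached-content tokens are involved):

```python
usage = response.usage_metadata

# 264 + 80 == 344, not 345, so at least one number in the comment is stale.
expected = usage.prompt_token_count + usage.candidates_token_count
assert usage.total_token_count == expected, (
    f"total {usage.total_token_count} != {expected}"
)
```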
prompt = "Please give a short summary of this file." | ||
|
||
# Call `count_tokens` to get input token count |
This sample uploads a text file, not a video - should the comment be amended?
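For context, the snippet in question does something like the following (a hedged sketch; the file name is a placeholder), so amending the comment to say it counts a text file seems right:

```python
import google.generativeai as genai

model = genai.GenerativeModel("models/gemini-1.5-flash")

# Upload a plain-text file via the File API; nothing video-specific here.
sample_file = genai.upload_file(path="sample.txt")

prompt = "Please give a short summary of this file."

# Call `count_tokens` to get the combined input token count (prompt + file).
print(model.count_tokens([prompt, sample_file]))
```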
Aligning with the Python samples per google-gemini/generative-ai-python#459. The main change is in the way output of snippets is marked.
Description of the change
Aligns the docs snippets with the Python samples; the main change is in the way snippet output is marked.

Motivation
Alignment and clarification of docs snippets.

Type of change
Documentation - code snippets.

Checklist
- Rebased onto the latest upstream main (git pull --rebase upstream main).