
feat(max tokens): increase max tokens from 6k to 15k and add to config #29

Merged — 4 commits merged into main from feat/increase-max-tokens on Jul 25, 2024

Conversation

alissonperez (Owner)

What was done?

This pull request lets users configure the maximum number of tokens accepted as LLM input, and raises the default limit from 6,000 to 15,000 tokens. This gives greater flexibility when processing larger inputs with the language model.

How was it done?

  • Updated the configuration to include a new parameter INPUT_MAX_TOKENS which defaults to 15000.
  • Modified the set_user_config method to allow integer values to be set in the configuration.
  • Adjusted the logic in the get_pr_data function to check against the new maximum token limit when processing input.
  • Updated documentation to reflect changes in the configuration endpoint.
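The changes above can be sketched roughly as follows. This is a hypothetical illustration, not the repository's actual code: apart from `INPUT_MAX_TOKENS`, `set_user_config`, and `get_pr_data` (named in the PR description), all helper names, the storage scheme, and the character-based token estimate are assumptions.

```python
# Hypothetical sketch of the described change; only INPUT_MAX_TOKENS,
# set_user_config, and get_pr_data come from the PR description.

DEFAULT_CONFIG = {
    "INPUT_MAX_TOKENS": 15000,  # raised from the previous 6000 limit
}

_user_config: dict = {}


def set_user_config(name: str, value) -> None:
    """Store a user-supplied setting; integer values are now accepted."""
    if not isinstance(value, (str, int)):
        raise ValueError("config values must be strings or integers")
    _user_config[name] = value


def get_config(name: str):
    """Return the user's value if set, otherwise the default."""
    return _user_config.get(name, DEFAULT_CONFIG.get(name))


def get_pr_data(diff_text: str) -> str:
    """Reject input larger than the configured token budget."""
    max_tokens = int(get_config("INPUT_MAX_TOKENS"))
    # Rough estimate: ~4 characters per token (an assumption; the real
    # project may use a proper tokenizer).
    estimated_tokens = len(diff_text) // 4
    if estimated_tokens > max_tokens:
        raise ValueError(
            f"input of ~{estimated_tokens} tokens exceeds the "
            f"{max_tokens}-token limit"
        )
    return diff_text
```

With this shape, a user can override the default via `set_user_config("INPUT_MAX_TOKENS", 8000)` and oversized diffs fail fast with a clear error instead of being silently truncated.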

How was it tested?

  • Added unit tests to verify that the INPUT_MAX_TOKENS can be set and retrieved correctly.
  • Ensured that the application correctly raises exceptions when the input exceeds the configured maximum tokens.
  • Verified that the changes do not break existing functionality by running all existing tests.
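A self-contained sketch of the kind of unit tests described is below. The tiny inline config helpers stand in for the project's real ones and are assumptions; only the `INPUT_MAX_TOKENS` setting and the exceeds-limit exception behaviour come from the PR description.

```python
# Hedged sketch of the described tests; the inline helpers below are
# stand-ins for the project's real config and input-handling code.
import unittest

_config = {"INPUT_MAX_TOKENS": 15000}


def set_user_config(name, value):
    if not isinstance(value, (str, int)):
        raise ValueError("unsupported config value type")
    _config[name] = value


def check_input(text):
    # ~4 characters per token is an assumed rough estimate.
    if len(text) // 4 > int(_config["INPUT_MAX_TOKENS"]):
        raise ValueError("input exceeds INPUT_MAX_TOKENS")


class TestInputMaxTokens(unittest.TestCase):
    def test_set_and_retrieve_integer_value(self):
        set_user_config("INPUT_MAX_TOKENS", 12000)
        self.assertEqual(_config["INPUT_MAX_TOKENS"], 12000)

    def test_exceeding_limit_raises(self):
        set_user_config("INPUT_MAX_TOKENS", 10)
        with self.assertRaises(ValueError):
            check_input("x" * 200)
```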

@alissonperez alissonperez merged commit e21f67a into main Jul 25, 2024
2 checks passed
@alissonperez alissonperez deleted the feat/increase-max-tokens branch July 25, 2024 18:21