feat: max-tokens for completion #12
Comments
should be pretty reasonable to PR if you wanna give it a try! just need to add a field to the settings, set a default value for it, and pass it through to the completion call. the only thing idk how to handle is how to represent the "infinity" option on a slider 🤔
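One way to handle the "infinity" option could be to reserve the slider's topmost notch for "no limit" and map it to `undefined`, so `max_tokens` is simply omitted from the request. A minimal sketch (the helper names and the `2049` ceiling are illustrative assumptions, not from the codebase):

```typescript
// Illustrative constants: the topmost slider notch stands in for "no limit".
const SLIDER_MIN = 1;
const SLIDER_MAX = 2049; // one past the highest real limit we expose

// Map a slider position to a max_tokens value; the top notch means
// "unlimited", which we represent as undefined (omit the field entirely).
function sliderToMaxTokens(position: number): number | undefined {
  return position >= SLIDER_MAX ? undefined : position;
}

// Inverse mapping, for initializing the slider from saved settings.
function maxTokensToSlider(maxTokens: number | undefined): number {
  return maxTokens === undefined ? SLIDER_MAX : maxTokens;
}
```

The UI could then render "∞" as the label whenever the slider sits at `SLIDER_MAX`.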
also hmmm, I'm guessing this is just an oversight @ctjlewis, or is there a good reason i'm missing?
I think it would be nice to separate the settings into those relevant to generating GPT responses and those relevant to completing new words. With chat we can't work with max_tokens, but when it comes to completing next words we can. I think we can also take inspiration from TypingMind on this (and more).
100%. if you want to work on this that'd be super appreciated, though i'd recommend sharing a sketch of whatever you propose implementing before spending the time writing the code cuz i may be a little picky here
That was just copy-pasted from the type libs. They may have updated it since then. I don't exactly recall why that was easier - I think it was a way to keep the openai libs out as a dependency. We can update them by hand.
Yeah, will do a basic sketch in Figma. My approach would be to keep the modal but add tabs (from Chakra). There would be 3 tabs - Global, GPT and Completion. Global would hold everything that is there now except model and temperature. GPT and Completion would each hold model and temperature, plus Completion would hold max tokens. Two things I am not sure about:
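The per-tab split described above could be modeled as nested settings types along these lines. Every field name below is an assumption for illustration, not the project's actual Settings shape, and the default model strings are just placeholders:

```typescript
// Hypothetical per-tab settings shapes; field names are illustrative.
interface GlobalSettings {
  apiKey: string;
}

interface ChatSettings {
  model: string;
  temperature: number;
}

interface CompletionSettings {
  model: string;
  temperature: number;
  maxTokens?: number; // undefined = no limit (omit from the request)
}

interface Settings {
  global: GlobalSettings;
  chat: ChatSettings;
  completion: CompletionSettings;
}

// Example defaults; model names are placeholders.
const defaultSettings: Settings = {
  global: { apiKey: "" },
  chat: { model: "gpt-3.5-turbo", temperature: 1 },
  completion: { model: "text-davinci-003", temperature: 1 },
};
```

Keeping `maxTokens` only on the completion side mirrors the point above: chat can't use max_tokens for next-word completion the same way, so only the Completion tab exposes it.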
Maybe let's change it to "Chat" and "Completion" to make it more clear. Also, we can just put a little paragraph before the inputs once you select the tab.
yes, let's add all of them if possible, including raw
Yeah, that makes much more sense.
Just want to let you know I will have time to work on this on the weekend; if someone wants, they can pick it up.
@transmissions11 Pinned types are updated, so max_tokens is supported. I guess OpenAI did not include it in the original API release (1.1.0).
@AdamSchinzel are you still working on this? I would like to pick it up.
should add a setting to constrain the max # of tokens in completions
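Wiring the setting through could look roughly like this: include `max_tokens` in the completions request body only when the user picked a finite limit. The `max_tokens` field name follows the OpenAI completions API; the helper and the model string are hypothetical:

```typescript
// Shape of the request body we build; only the fields used here.
interface CompletionParams {
  model: string;
  prompt: string;
  max_tokens?: number;
}

// Hypothetical helper: builds the completions request body, omitting
// max_tokens entirely when no finite limit was configured.
function buildCompletionBody(
  prompt: string,
  maxTokens: number | undefined
): CompletionParams {
  const body: CompletionParams = { model: "text-davinci-003", prompt };
  if (maxTokens !== undefined && Number.isFinite(maxTokens)) {
    body.max_tokens = maxTokens;
  }
  return body;
}
```

Omitting the field (rather than sending a sentinel like `-1` or `Infinity`) keeps the request valid, since the API falls back to its own default when `max_tokens` is absent.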