
Specify MAX_NEW_TOKENS for ktransformers server #92

Open

arthurv opened this issue Sep 19, 2024 · 2 comments

Comments

arthurv commented Sep 19, 2024

max_new_tokens defaults to 1000. It can be overridden in ktransformers.local_chat via --max_new_tokens, but the server exposes no equivalent option.

Please add a --max_new_tokens option to the ktransformers server so we can request longer outputs, and expose more generation options as well (input context length, etc.).

@Azure-Tang (Contributor)

Apologies for the inconvenience. If you’re building from source, you can modify the max_new_tokens parameter in ktransformers/server/backend/args.py. We will include this update in the next Docker release.
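
For anyone building from source, the change amounts to editing one default value. A minimal sketch of what that edit could look like, assuming args.py holds a dataclass-style block of server defaults (the actual structure and field names in ktransformers/server/backend/args.py may differ):

```python
# ktransformers/server/backend/args.py (sketch; actual contents may differ)
from dataclasses import dataclass

@dataclass
class ConfigArgs:
    # ... other server settings ...
    max_new_tokens: int = 8192  # raised from the shipped default of 1000
```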

@bitbottrap

I just encountered this limitation. It would be even better if the REST API honored per-request settings for the maximum context length and the maximum number of generated tokens.
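
For illustration, here is what honoring a per-request cap could look like from the client side, assuming the server exposes an OpenAI-compatible /v1/chat/completions endpoint; the host, port, and model name below are placeholders:

```python
import requests

resp = requests.post(
    "http://localhost:10002/v1/chat/completions",  # placeholder host/port
    json={
        "model": "deepseek-v2",                    # placeholder model name
        "messages": [{"role": "user", "content": "Summarize this repo."}],
        # Per-request generation cap; today the server-wide max_new_tokens
        # default (1000) applies regardless of what the client requests.
        "max_tokens": 4096,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```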
