
Fix prompt caching on llama.cpp endpoints #920

Merged: 2 commits into huggingface:main on Mar 11, 2024

Conversation

reversebias (Contributor)

In versions of llama.cpp since 3677, the prompt cache is dropped by the server unless `cache_prompt: true` is included in the request.

This change reduces prompt processing time in long chat threads: local inference with large models can take tens of seconds of prompt processing for chats with thousands of context tokens, so reusing the cache massively improves responsiveness.
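For illustration, here is a minimal sketch of the kind of request this affects; it is not the actual chat-ui code, and the server URL and prompt are placeholders:

```ts
// Minimal sketch: a completion request to a llama.cpp server.
// The URL and prompt are placeholders, not chat-ui's real values.
const response = await fetch("http://localhost:8080/completion", {
	method: "POST",
	headers: { "Content-Type": "application/json" },
	body: JSON.stringify({
		prompt: "<chat history rendered into the model's prompt template>",
		n_predict: 256,
		// Without this flag, recent llama.cpp servers drop the prompt cache
		// between requests, forcing the full prompt to be reprocessed each turn.
		cache_prompt: true,
	}),
});
const result = await response.json();
console.log(result.content);
```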

@nsarrazin merged commit eb071be into huggingface:main on Mar 11, 2024
3 checks passed
@nsarrazin (Collaborator)

Thanks for the contribution! 🚀

ice91 pushed a commit to ice91/chat-ui that referenced this pull request on Oct 30, 2024
Explicitly enable prompt caching on llama.cpp endpoints

Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>