Is there an existing issue / discussion for this?

Is there an existing answer for this in FAQ?
Current Behavior
In the cpp example you provide, the function `auto QwenTokenizer::build_prompt(const std::vector<std::string> &history) const -> std::string` rebuilds the user and assistant messages on every turn of a multi-turn conversation. When it reconstructs a past assistant message, however, it emits `<|im_start|>" << history[i + 1] << "<|im_end|>` rather than `<|im_start|>assistant\n" << history[i + 1] << "<|im_end|>`. Since the model generated that reply after an `<|im_start|>assistant\n` header, the rebuilt prompt no longer matches the token sequence produced in the previous turn, which means the KV cache cannot be reused across turns: each round the prompt has to be rebuilt from scratch and a new KV cache generated. Is my understanding correct?
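For illustration, here is a minimal sketch of the construction I would expect, assuming the usual ChatML layout; the system prompt text and the overall structure here are my assumption, not the repository's actual code. Tagging past assistant turns with the same `assistant\n` role header the model saw when generating them keeps the serialized history a stable prefix:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Illustrative sketch, not the repository's implementation.
// history alternates user/assistant turns and ends with the latest user turn.
std::string build_prompt(const std::vector<std::string> &history) {
    std::ostringstream oss;
    oss << "<|im_start|>system\nYou are a helpful assistant.<|im_end|>";
    for (size_t i = 0; i < history.size(); i += 2) {
        oss << "\n<|im_start|>user\n" << history[i] << "<|im_end|>";
        if (i + 1 < history.size()) {
            // Emit the explicit assistant role header for past replies, so the
            // rebuilt text matches the tokens generated in the previous turn
            // and the existing KV cache remains a valid prefix.
            oss << "\n<|im_start|>assistant\n" << history[i + 1] << "<|im_end|>";
        }
    }
    oss << "\n<|im_start|>assistant\n";
    return oss.str();
}
```

With this formatting, only the newest user turn needs to be tokenized and prefilled each round, because everything before it is byte-identical to the previous request.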
Expected Behavior
No response
Steps To Reproduce
No response
Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):
Anything else?
No response