
[BUG] How should the prompt for multi-turn conversations be constructed? #81

Open
2 tasks done
791136190 opened this issue May 24, 2024 · 0 comments

Comments


791136190 commented May 24, 2024

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

In the cpp example you provide, the function `auto QwenTokenizer::build_prompt(const std::vector<std::string> &history) const -> std::string {` rebuilds the user and assistant messages for multi-turn conversations. However, when it reconstructs a past assistant message it writes `<|im_start|>" << history[i + 1] << "<|im_end|>"` instead of `<|im_start|>assistant\n" << history[i + 1] << "<|im_end|>"`. Because the tokens of the rebuilt history then differ from what was fed to the model in the previous round, the kv cache cannot be reused across turns: the prompt has to be rebuilt every turn and a new kv cache generated. Is my understanding correct?
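For reference, here is a minimal sketch of the ChatML layout the reporter appears to expect, where past assistant replies keep the `assistant\n` role tag so the rebuilt prefix stays byte-identical to the previous round and the kv cache prefix could in principle be reused. The function name `build_prompt_with_role_tags` and the system prompt text are illustrative assumptions, not code from qwen.cpp, and this reflects the reporter's reading rather than a confirmed upstream fix.

```cpp
// Sketch only: history = [user_0, assistant_0, user_1, assistant_1, ..., user_n]
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

std::string build_prompt_with_role_tags(const std::vector<std::string> &history) {
    std::ostringstream oss;
    // Illustrative system prompt; the real template may differ.
    oss << "<|im_start|>system\nYou are a helpful assistant.<|im_end|>";
    for (size_t i = 0; i + 1 < history.size(); i += 2) {
        oss << "\n<|im_start|>user\n" << history[i] << "<|im_end|>"
            // Keep the "assistant\n" role tag on past replies as well, so the
            // rebuilt prefix matches the tokens already held in the kv cache.
            << "\n<|im_start|>assistant\n" << history[i + 1] << "<|im_end|>";
    }
    // The final user turn, followed by an open assistant tag for generation.
    oss << "\n<|im_start|>user\n" << history.back() << "<|im_end|>\n<|im_start|>assistant\n";
    return oss.str();
}

int main() {
    std::vector<std::string> history = {"Hello", "Hi! How can I help?", "Tell me a joke"};
    std::cout << build_prompt_with_role_tags(history) << std::endl;
}
```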

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

@jklj077 jklj077 transferred this issue from QwenLM/Qwen May 27, 2024