[Model][Speculative Decoding] support k > 1 for MTP #13805
base: main
Conversation
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed 16b147c to 49cddd4, then 49cddd4 to 36fc1ec (Signed-off-by: Lu Fang <fanglu@fb.com>).
Hi @luccafong, could you take a look at #13626? I would appreciate it if you could highlight any key differences between this implementation and the existing PR.
Hi @benchislett, #13626 targets EAGLE-style drafting, which runs the forward pass repeatedly on the same module, while this PR targets running the forward pass through k separate MTP modules (k > 1), as described in the paper, so they are quite different.
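For readers unfamiliar with the distinction, here is a rough sketch of the two drafting loops. All names and signatures are hypothetical illustrations, not code from either PR:

```python
# Hypothetical sketch contrasting the two speculative-decoding styles
# discussed above; module calls are stand-ins for real forward passes.

def eagle_style_draft(draft_module, hidden, token, k):
    """EAGLE style: one shared draft module, re-run k times on its own outputs."""
    draft_tokens = []
    for _ in range(k):
        hidden, token = draft_module(hidden, token)
        draft_tokens.append(token)
    return draft_tokens

def mtp_style_draft(mtp_modules, hidden, token):
    """MTP style: k distinct modules; module i predicts token t+i+1,
    each consuming the hidden state produced by the previous module."""
    draft_tokens = []
    for module in mtp_modules:  # k separate modules, k > 1
        hidden, token = module(hidden, token)
        draft_tokens.append(token)
    return draft_tokens
```

The key difference is where the weights live: EAGLE reuses a single draft head across steps, while MTP dedicates a separately trained module to each lookahead position.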
I hope that we can orchestrate compatibility between these two features in the future so that either one is possible. I think there is a lot of overlap between the contributions of each feature. |
cc @LiuXiaoxuanPKU for early review. Will need more cleanup and benchmarks before publishing.
  # Prepare inputs for the next step
- if step != num_steps - 1:
+ if step != num_steps - 1 and not self.mtp:
Is the multi-step logic omitted here, with self.mtp just using TP1DraftModelRunner in is_fallback mode?
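To make the question concrete, here is a minimal sketch of the loop shape implied by the diff; this is a hypothetical illustration, not vLLM's actual runner code. When the MTP flag is set, the next-step input preparation inside the multi-step loop is bypassed:

```python
# Hypothetical illustration of the guarded multi-step loop shown in the diff.
def run_multi_step(mtp, num_steps, execute, prepare_next):
    outputs = []
    step_input = 0  # stand-in for the prepared model input
    for step in range(num_steps):
        output = execute(step_input)
        # Mirrors the diff: skip next-step preparation when running MTP,
        # presumably because each MTP module is driven by its own forward pass.
        if step != num_steps - 1 and not mtp:
            step_input = prepare_next(output)
        outputs.append(output)
    return outputs
```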
  outputs.append(output)
- if model_input.attn_metadata.num_prefills == 0 \
+ if not self.mtp and model_input.attn_metadata.num_prefills == 0 \
Why is this block skipped?
No description provided.