[Spec-decode] Refactor cudagraphs for spec-decode; support uniform_alignment of cudagraph sizes. #23679
Open: fhl2000 wants to merge 34 commits into vllm-project:main from fhl2000:fix_cudagraph_drafter
+719 −239
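The "uniform_alignment of cudagraph sizes" in the title is not spelled out in this thread; the sketch below is a hypothetical illustration of the general idea (the helper names are mine, not identifiers from this diff): when spec-decode makes every request contribute the same number of query tokens per step, replayable token counts are multiples of that uniform query length, so capture sizes are rounded up to that alignment.

```python
# Hypothetical sketch of aligning cudagraph capture sizes to a uniform
# query length; helper names are illustrative, not from this PR.
import bisect
from typing import Optional

def align_capture_sizes(base_sizes: list[int], uniform_query_len: int) -> list[int]:
    """Round each candidate capture size up to a multiple of uniform_query_len.

    In a uniform spec-decode batch every request contributes exactly
    uniform_query_len query tokens (e.g. 1 + num_speculative_tokens),
    so only multiples of it are valid padded token counts.
    """
    q = uniform_query_len
    return sorted({(s + q - 1) // q * q for s in base_sizes})

def pick_padded_size(num_tokens: int, sizes: list[int]) -> Optional[int]:
    """Smallest captured size that fits num_tokens; None means fall back to eager."""
    i = bisect.bisect_left(sizes, num_tokens)
    return sizes[i] if i < len(sizes) else None

# e.g. with 3 speculative tokens per request, uniform_query_len = 4:
#   align_capture_sizes([1, 2, 4, 8, 16], 4) -> [4, 8, 16]
#   pick_padded_size(10, [4, 8, 16])         -> 16 (pad 10 tokens up to 16)
```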
Commits (34, all by fhl2000)
517b672 fixes and refactors spec-decode cudagraph
40e1ccb remove build_for_cudagraph_capture
3550717 support capturing mutiple uniform_query_len
a142f14 fix typo
f7d73f8 fix typo
02390fc fix broken examples/offline_inference/spec_decode.py
198fb66 Merge remote-tracking branch 'origin/main' into fix_cudagraph_drafter
14c6918 Merge branch 'main' into fix_cudagraph_drafter
ec02778 fix pre-commit
6b90770 Merge branch 'main' into fix_cudagraph_drafter
286677f revert spec_decode.py
874639c Merge branch 'main' into fix_cudagraph_drafter
0eda111 address comments
9c50e6e revert build_for_cudagraph_capturing
e4a1a78 remove unnecessary assertion
ce32326 solving conflicts/Merge remote-tracking branch 'origin/main' into fix…
ad5ba70 Merge branch 'main' into fix_cudagraph_drafter
691c21e fixes for ubatching
43b2753 fix CI
fde10ba Merge remote-tracking branch 'origin/main' into fix_cudagraph_drafter
0a3fe05 fix
804598b Merge branch 'main' into fix_cudagraph_drafter
40bd81b Merge remote-tracking branch 'origin/main' into fix_cudagraph_drafter
a51344e Merge remote-tracking branch 'origin/main' into fix_cudagraph_drafter
d170341 Merge branch 'main' into fix_cudagraph_drafter
0ee4aef WIP:address dp padding issue
a4872bc clean up
872015e Merge branch 'main' into fix_cudagraph_drafter
d1499c2 Merge remote-tracking branch 'origin/main' into fix_cudagraph_drafter
c18486a refactor eagle dummy run
9b99056 Merge branch 'main' into fix_cudagraph_drafter
299ce7d fix drafter when enforce_eager
b5c315a fix pre-commit
25d3f3b Merge branch 'main' into fix_cudagraph_drafter
Apologies for this partially true statement. I just found that when max_model_len is extremely large, FA2 is slow at capturing FULL, while Triton attention stays as fast as when max_model_len is small. So Triton attention is fine with removing this function, but we should do something else to keep FA2 from slowing down when capturing FULL with an extremely large max_model_len. I'm not sure whether the same happens for other attention backends.
The FlashInfer backend is also slow at capturing FULL when max_model_len is large.
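For reference, one minimal way to quantify "slow at capturing" is to time the capture itself. The sketch below is mine, not code from this PR: it times CUDA graph capture of a toy matmul. The FA2/FlashInfer slowdown discussed above comes from backend-specific capture-time work that scales with max_model_len, which a toy op does not model, but the same harness can wrap a real dummy forward pass.

```python
# Minimal sketch (not from this PR) for measuring CUDA graph capture time.
import time
import torch

def time_capture(num_tokens: int, hidden: int = 4096) -> float:
    x = torch.randn(num_tokens, hidden, device="cuda")
    # Warm up on a side stream, as the torch.cuda.graph docs recommend,
    # so lazy kernel/library init is not charged to the capture itself.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        x @ x.t()
    torch.cuda.current_stream().wait_stream(s)
    torch.cuda.synchronize()

    g = torch.cuda.CUDAGraph()
    start = time.perf_counter()
    with torch.cuda.graph(g):  # capture the op into graph g
        x @ x.t()
    torch.cuda.synchronize()
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in (1, 64, 1024):
        print(f"num_tokens={n}: capture took {time_capture(n):.4f}s")
```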