[DCP] Support Decode Context Parallel (DCP) for GQA with Flashinfer #25438
base: main
Conversation
Code Review
This pull request introduces Decode Context Parallel (DCP) support for Grouped-Query Attention (GQA) with the FlashInfer backend, which is a valuable enhancement for distributed inference performance. The changes are comprehensive, covering configuration validation, modifications to the attention backend to support DCP-specific logic like query head gathering and LSE-based output correction, and the implementation of a custom attention mask for prefills. The addition of tests for a GQA model using the new functionality is also a great inclusion. The overall implementation is well-executed. I have a couple of suggestions to enhance code quality by addressing a dynamically assigned attribute and removing duplicated code.
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited set of checks runs automatically, and you can ask your reviewers to trigger select CI tests on top of those. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
540c862 to b9e9b41
continue
K = ((rightmost - r) // p) + 1
j = torch.arange(K)
t = torch.arange(Q)
nit: we generally avoid single-character variable names; they're OK if there is a supporting comment. Can you please add comments explaining what the mask looks like and how it is constructed?
Thank you for your review. We have added a comment with a mask example and an explanation of the algorithm, alongside the vectorization improvements.
torch.int64).tolist()
r = self.dcp_rank
p = self.dcp_world_size
for i in range(num_prefills):
nit: is there a way we can vectorize this loop or replace it with a Triton kernel? Ideally we avoid Python loops, as they can be very slow and create GPU bubbles.
Thank you for your valuable review. We have vectorized the "num_prefills" loop to avoid GPU bubbles. Looking forward to your further review.
if self.dcp_world_size > 1:
# init custom mask for interleaved kv cache
# |-------total_lens----------|
# |--context_lens--|--q_lens--|
# Example: dcp_size=2, dcp_rank=0
# For a SINGLE prefill seq, q_lens=3, total_lens=5
# k_lens on RANK0 is (5 - 1 - 0) // 2 + 1 = 3
# mask.shape = [q_lens, k_lens] = [3,3]
# mask [[True, True, False],
# [True, True, False],
# [True, True, True]]
dcp_rank = self.dcp_rank
dcp_size = self.dcp_world_size
q_lens = (qo_indptr_cpu[1:] - qo_indptr_cpu[:-1]).to(
dtype=torch.int64, device=self.device)
total_lens = seq_lens_cpu[prefill_start:prefill_start +
num_prefills].to(dtype=torch.int64,
device=self.device)
context_lens = total_lens - q_lens
# max indices for global sequences
max_indices = total_lens - 1
# if a sequence's max_indices value is smaller than
# dcp_rank, the current rank holds no kv cache for it,
# so the sequence is invalid and its mask is skipped
valid = (max_indices >= dcp_rank)
assert torch.any(valid), "There is no valid sequence"
# local kv lens on current dcp_rank
k_lens = torch.div(max_indices - dcp_rank,
dcp_size,
rounding_mode="floor") + 1
k_lens = torch.where(
valid,
k_lens,
torch.zeros_like(k_lens))
# vectorize operation
# obtain the max length of all prefill reqs
max_q = int(q_lens[valid].max().item())
max_k = int(k_lens[valid].max().item())
# generate local q and k indices
q_indices = torch.arange(max_q, device=self.device)
k_indices = torch.arange(max_k, device=self.device)
# valid q and k indices of each reqs
valid_q = valid[:, None] & \
(q_indices[None, :] < q_lens[:, None])
valid_k = valid[:, None] & \
(k_indices[None, :] < k_lens[:, None])
# where global q_indices >= global k_indices,
# the mask is True
# global q_indices = context_lens + local q_indices
# global k_indices = local k_indices * dcp_size + dcp_rank
# ====> local k_indices must be smaller than or equal to k_upper
# k_upper=(context_lens + local q_indices - dcp_rank) // dcp_size
k_upper = torch.div(
context_lens[:, None] + q_indices - dcp_rank,
dcp_size, rounding_mode="floor")
k_upper = torch.where(
valid_q,
torch.clamp(k_upper, min=-1),
k_upper.new_full(k_upper.shape, -1))
mask = (k_indices[None, None, :] <= k_upper[:, :, None]) \
& (k_upper[:, :, None] >= 0)
valid_positions = valid_q[:, :, None] & valid_k[:, None, :]
# flashinfer backend needs flattened format
custom_mask = torch.masked_select(mask, valid_positions)
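For reference, a minimal standalone sketch (not the PR code path; the scalar inputs are hypothetical) that reproduces the mask example from the comments above for dcp_size=2, dcp_rank=0, q_lens=3, total_lens=5:

import torch

dcp_rank, dcp_size = 0, 2
q_lens = torch.tensor([3])
total_lens = torch.tensor([5])
context_lens = total_lens - q_lens
# local kv length on this rank: floor((total - 1 - rank) / size) + 1
k_lens = (total_lens - 1 - dcp_rank) // dcp_size + 1   # -> 3
q_idx = torch.arange(int(q_lens.max()))
k_idx = torch.arange(int(k_lens.max()))
# global query positions start after the context; global key positions
# are interleaved across ranks: j * dcp_size + dcp_rank
global_q = context_lens[:, None] + q_idx               # [num_seqs, max_q]
global_k = k_idx * dcp_size + dcp_rank                 # [max_k]
mask = global_q[:, :, None] >= global_k[None, None, :]
print(mask[0])
# tensor([[ True,  True, False],
#         [ True,  True, False],
#         [ True,  True,  True]])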
Apologies for the delayed review! Left a couple of nits; overall it's looking pretty good though.
This pull request has merge conflicts that must be resolved before it can be merged.
5a107b7 to b163b5b
Signed-off-by: gaojc <1055866782@qq.com>
Hi @LucasWilkinson, could you re-review this PR and give the final sign-off? Thanks!
This pull request has merge conflicts that must be resolved before it can be merged.
e7f08ec to 8b7e0ed
Signed-off-by: Jingchun Gao <63247409+gjc0824@users.noreply.github.com>
8b7e0ed to bff3cda
Apologies for the delay! Overall looks pretty good so far but I think we should land #26696 first (seems more important and this can build on that), thoughts?
self.num_qo_heads = self.model_config.get_num_attention_heads(
self.vllm_config.parallel_config
try:
see: #26696 (comment)
block_table_tensor = common_attn_metadata.block_table_tensor
if self.dcp_world_size > 1:
seq_lens_np = seq_lens_np // self.dcp_world_size + (
should we land #26696 first and then update this to use the dcp_local_seq_lens computed in the model runner?
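For context, here is a minimal sketch of how local KV lengths fall out of the interleaved (round-robin) KV sharding; the helper name is illustrative, and with #26696 this value would come from dcp_local_seq_lens computed in the model runner:

import numpy as np

def local_kv_lens(seq_lens: np.ndarray, dcp_rank: int, dcp_world_size: int) -> np.ndarray:
    # Token i lives on rank i % dcp_world_size, so each rank holds
    # floor(len / world_size) tokens plus one extra when its rank index
    # falls inside the remainder.
    return seq_lens // dcp_world_size + (dcp_rank < seq_lens % dcp_world_size)

seq_lens = np.array([5, 8, 1])
print(local_kv_lens(seq_lens, dcp_rank=0, dcp_world_size=2))  # [3 4 1]
print(local_kv_lens(seq_lens, dcp_rank=1, dcp_world_size=2))  # [2 4 0]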
if self.dcp_world_size > 1:
prefill_query = get_dcp_group().all_gather(
prefill_query.contiguous(), dim=1
)
nit: I guess this is fine, but the name "decode context parallel" is falling apart a bit here 😞
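As a shape-level illustration only (simulating the collective with torch.cat rather than vLLM's get_dcp_group().all_gather, and with hypothetical sizes), the gather along dim=1 leaves each rank with every query head so it can attend against its local KV shard:

import torch

num_tokens, head_dim, dcp_world_size = 4, 64, 2
heads_per_rank = 8  # query heads held by each DCP rank before the gather

# Per-rank query shards; all_gather(dim=1) concatenates them along the head dim.
shards = [torch.randn(num_tokens, heads_per_rank, head_dim)
          for _ in range(dcp_world_size)]
gathered = torch.cat(shards, dim=1)
assert gathered.shape == (num_tokens, heads_per_rank * dcp_world_size, head_dim)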
],
"bigcode/gpt_bigcode-santacoder": [
CPTestSettings.detailed(),
CPTestSettings.detailed(tp_base=2),
I think it's better to keep the default backend for CI.
class CPTestOptions(NamedTuple):
multi_node_only: bool
load_format: str | None = None
attn_backend: str = "FLASH_ATTN"
MLA can't use the "FLASH_ATTN" backend, so the default value should not be set.
Purpose
This PR adds Decode Context Parallel (DCP) support for GQA, following PR #23734 and PR #24864. The current implementation is based on the FlashInfer attention backend.
FlashInfer inserts the current query's KV into the cache before computation. Each query then attends to both its own KV and the context KV on the local device, and log-sum-exp (LSE) correction is applied to combine the partial attention outputs.
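A minimal sketch of the LSE-based correction (illustrative math only, not the FlashInfer kernel interface): each rank produces a partial output plus a log-sum-exp per query and head, and the partials are combined with weights exp(lse_i - lse_global).

import torch

def merge_partial_attn(outputs: torch.Tensor, lses: torch.Tensor) -> torch.Tensor:
    # outputs: [num_ranks, num_tokens, num_heads, head_dim]
    # lses:    [num_ranks, num_tokens, num_heads]
    global_lse = torch.logsumexp(lses, dim=0)
    weights = torch.exp(lses - global_lse.unsqueeze(0))
    return (weights.unsqueeze(-1) * outputs).sum(dim=0)

# Sanity check: attending over two KV shards and LSE-merging the partial
# results matches attention over the full KV (single sequence, no mask).
def attn(q, k, v, scale):
    scores = torch.einsum("qhd,khd->qhk", q, k) * scale
    lse = torch.logsumexp(scores, dim=-1)
    return torch.einsum("qhk,khd->qhd", scores.softmax(-1), v), lse

q = torch.randn(2, 4, 8)   # [tokens, heads, head_dim]
k = torch.randn(10, 4, 8)
v = torch.randn(10, 4, 8)
scale = 8 ** -0.5
o0, l0 = attn(q, k[0::2], v[0::2], scale)   # "rank 0" KV shard (even positions)
o1, l1 = attn(q, k[1::2], v[1::2], scale)   # "rank 1" KV shard (odd positions)
merged = merge_partial_attn(torch.stack([o0, o1]), torch.stack([l0, l1]))
ref, _ = attn(q, k, v, scale)
assert torch.allclose(merged, ref, atol=1e-5)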
Test Plan
Qwen/Qwen3-235B-A22B
Test Result
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.