[Bugfix][WideEP] Apply TP Attn + EP MoE fix to other models #24982
Merged
tlrmchlsmth merged 39 commits into vllm-project:main from tlrmchlsmth:tp_attn_fix_more_models on Sep 27, 2025
Conversation
tlrmchlsmth pushed a commit that referenced this pull request on Sep 28, 2025
simon-mo pushed commits that referenced this pull request on Sep 28, 2025
This was referenced Sep 28, 2025
baonudesifeizhai pushed a commit to baonudesifeizhai/vllm that referenced this pull request on Sep 28, 2025
xuechendi pushed a commit to vllm-project/vllm-gaudi that referenced this pull request on Sep 30, 2025, with this description:
After vllm-project/vllm#24982 merged, sequence-parallel MoE is turned on when `enable_expert_parallel=True`, `tp_size > 1`, and `dp_size > 1`. Since Gaudi has no choice of `VLLM_ALL2ALL_BACKEND`, we cannot easily bypass it, so this PR aims to support the feature.

```python
class ParallelConfig:
    @property
    def use_sequence_parallel_moe(self) -> bool:
        return (envs.VLLM_ALL2ALL_BACKEND
                in ("allgather_reducescatter", "naive",
                    "deepep_high_throughput", "deepep_low_latency")
                and self.enable_expert_parallel
                and self.tensor_parallel_size > 1
                and self.data_parallel_size > 1)
```

Update: No hard requirement on vllm-project/vllm#25828

Signed-off-by: Wuxun Zhang <wuxun.zhang@intel.com>
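To make the gate concrete, here is a small self-contained sketch of the same condition; the standalone function signature is an assumption for illustration (real vLLM reads `envs.VLLM_ALL2ALL_BACKEND` and the fields of `ParallelConfig`):

```python
# Backends for which sequence-parallel MoE is enabled, per the
# snippet above.
_SP_MOE_BACKENDS = ("allgather_reducescatter", "naive",
                    "deepep_high_throughput", "deepep_low_latency")

def use_sequence_parallel_moe(backend: str,
                              enable_expert_parallel: bool,
                              tp_size: int,
                              dp_size: int) -> bool:
    # Sequence-parallel MoE requires a supported all2all backend, EP
    # enabled, and both TP and DP ranks greater than one.
    return (backend in _SP_MOE_BACKENDS
            and enable_expert_parallel
            and tp_size > 1
            and dp_size > 1)

# EP with TP > 1 and DP > 1 turns the feature on ...
assert use_sequence_parallel_moe("naive", True, tp_size=2, dp_size=2)
# ... while any single-rank dimension leaves it off.
assert not use_sequence_parallel_moe("naive", True, tp_size=1, dp_size=2)
```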
iboiko-habana pushed a commit to iboiko-habana/vllm-gaudi that referenced this pull request on Oct 2, 2025 (same description as above)
pdasigi pushed commits to pdasigi/vllm that referenced this pull request on Oct 2, 2025
yewentao256 pushed commits that referenced this pull request on Oct 3, 2025
xuebwang-amd pushed commits to xuebwang-amd/vllm that referenced this pull request on Oct 10, 2025
choprahetarth pushed commits to Tandemn-Labs/vllm that referenced this pull request on Oct 11, 2025
shyeh25 pushed commits to shyeh25/vllm that referenced this pull request on Oct 14, 2025
lywa1998 pushed commits to lywa1998/vllm that referenced this pull request on Oct 20, 2025
alhridoy pushed commits to alhridoy/vllm that referenced this pull request on Oct 24, 2025
xuebwang-amd pushed commits to xuebwang-amd/vllm that referenced this pull request on Oct 24, 2025
Labels
ci/build
deepseek: Related to DeepSeek models
gpt-oss: Related to GPT-OSS models
llama: Related to Llama models
multi-modality: Related to multi-modality (#4194)
qwen: Related to Qwen models
ready: ONLY add when PR is ready to merge/full CI is needed
speculative-decoding
Purpose
Prior to this PR, in many cases, using TP attention and EP MoE with `--tensor-parallel-size N --data-parallel-size M --enable-expert-parallel` would result in a factor of N redundant work in the MoE layers. This PR extends #24134 to other models, and to the `naive` and `allgather_reducescatter` All2All backends.
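For reference, a minimal sketch of the affected configuration through vLLM's Python API; the model and parallel sizes are illustrative, and the keyword arguments are assumed to mirror the CLI flags above:

```python
from vllm import LLM

# Hypothetical launch of one of the models exercised by this PR:
# TP attention with EP MoE across 2 x 2 = 4 GPUs.
llm = LLM(
    model="Qwen/Qwen3-30B-A3B-FP8",
    tensor_parallel_size=2,       # --tensor-parallel-size 2
    data_parallel_size=2,         # --data-parallel-size 2
    enable_expert_parallel=True,  # --enable-expert-parallel
)
print(llm.generate("Hello")[0].outputs[0].text)
```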
Test Plan
Test Result
- Qwen/Qwen3-30B-A3B-FP8
- Qwen/Qwen3-Next-80B-A3B-Instruct (with `--enforce-eager` due to #25437; see the sketch after this list)
- meta-llama/Llama-4-Scout-17B-16E
- ibm-granite/granite-4.0-tiny-preview (with `--enforce-eager` due to #25437 (comment))
- openai/gpt-oss-20b (main at TP4 is almost the same)
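Two of the runs above needed eager mode; a hedged sketch of that workaround through the Python API, assuming `enforce_eager=True` corresponds to `--enforce-eager` (the parallel size is illustrative, not the exact test setup):

```python
from vllm import LLM

# Qwen3-Next run from the list above; eager mode works around #25437.
llm = LLM(
    model="Qwen/Qwen3-Next-80B-A3B-Instruct",
    tensor_parallel_size=4,       # illustrative size
    enable_expert_parallel=True,
    enforce_eager=True,           # --enforce-eager
)
```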
Essential Elements of an Effective PR Description Checklist
`supported_models.md` and `examples` for a new model.