[BugFix] Fix Qwen3-next break #3428
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Needs #3719
Code Review
This pull request addresses a bug with the Qwen3-next model on vLLM. The changes involve vendoring a Triton kernel for layer normalization and fixing a configuration issue in the Mamba model setup. The fixes appear to be correct and address the immediate problem. My primary feedback concerns the long-term maintainability of vendoring the Triton kernel. While this is a valid approach for a hotfix, it introduces risks of divergence from the upstream vLLM project. I've provided a comment with a suggestion to mitigate this risk.
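The divergence risk the review raises is commonly mitigated by preferring the upstream symbol and only falling back to the vendored copy when the upstream import fails. The sketch below illustrates that pattern generically; the resolver function and the module paths in the example are hypothetical, not the actual names in this PR, and stdlib modules stand in for the real kernel modules.

```python
import importlib


def resolve_kernel(upstream_path: str, vendored_path: str, name: str):
    """Return `name` from the upstream module when it is importable and
    exposes the symbol; otherwise fall back to the vendored module."""
    for module_path in (upstream_path, vendored_path):
        try:
            return getattr(importlib.import_module(module_path), name)
        except (ImportError, AttributeError):
            continue
    raise ImportError(
        f"{name!r} not found in {upstream_path!r} or {vendored_path!r}"
    )


# Stdlib modules stand in for the real upstream/vendored kernel modules:
sqrt = resolve_kernel("no_such_upstream_module", "math", "sqrt")
```

With this shape, the vendored kernel is only exercised when the upstream implementation is missing, so a later upstream fix is picked up automatically without deleting the fallback.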
Incompatible with upstream changes because `'torch_npu._C._NPUDeviceProperties' object has no attribute 'multi_processor_count'`.
This pull request has conflicts; please resolve them before we can evaluate the pull request.
Test Results: It seems there is an accuracy problem.
This has been solved by 6c65dd8.
Signed-off-by: Icey <1790571317@qq.com>
Has been solved by #3549.
Signed-off-by: Icey <1790571317@qq.com>
What this PR does / why we need it?
Fix Qwen3NextGatedDeltaNet, caused by vllm-project/vllm#26437
Does this PR introduce any user-facing change?
N/A
How was this patch tested?