[CPU]Improve cpu fused moe perf #27244

Conversation
- Add oneDNN/ACL matmul path for AArch64
- Use silu_and_mul Op

Signed-off-by: Zhang Xiangze <Xiangze.Zhang@arm.com>
Code Review
This pull request improves CPU performance for fused MoE layers by introducing a oneDNN/ACL matmul path for AArch64 architectures and leveraging a custom silu_and_mul operator. The changes are well-implemented, preparing oneDNN matmul handlers during initialization and using them in the forward pass. While this is a solid performance enhancement, I've identified a high-severity issue concerning model serialization. The created oneDNN handlers are not serializable, and attaching them to the model can lead to crashes upon saving and reloading the model.
# Prepare per-expert oneDNN matmul handles once; the lambdas are reused in forward.
gate_up_handle = ops.create_onednn_mm(layer_w13_weight.t(), 32)
layer.gate_up_linear.append(
    lambda x, handle=gate_up_handle, bias=layer_w13_bias: ops.onednn_mm(
        handle, x, bias
    )
)
down_handle = ops.create_onednn_mm(layer_w2_weight.t(), 32)
layer.down_linear.append(
    lambda x, handle=down_handle, bias=layer_w2_bias: ops.onednn_mm(
        handle, x, bias
    )
)
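For context, here is a minimal sketch of how these per-expert callables might be consumed in the forward pass. This is illustrative only: `fused_moe_forward`, `hidden_states`, and `topk_ids` are assumed names, routing weights are omitted, and this is not the PR's actual forward implementation.

```python
import torch
import torch.nn.functional as F

def fused_moe_forward(layer, hidden_states: torch.Tensor,
                      topk_ids: torch.Tensor) -> torch.Tensor:
    # Illustrative sketch: dispatch tokens to experts and reuse the prebuilt
    # oneDNN matmul callables stored on the layer at initialization time.
    # Top-k routing weights are omitted for brevity.
    output = torch.zeros_like(hidden_states)
    for expert_id in range(len(layer.gate_up_linear)):
        token_mask = (topk_ids == expert_id).any(dim=-1)
        if not token_mask.any():
            continue
        x = hidden_states[token_mask]
        gate_up = layer.gate_up_linear[expert_id](x)  # oneDNN/ACL matmul
        d = gate_up.shape[-1] // 2
        act = F.silu(gate_up[..., :d]) * gate_up[..., d:]
        output[token_mask] += layer.down_linear[expert_id](act)  # oneDNN/ACL matmul
    return output
```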
The CPUDNNLGEMMHandler objects created by ops.create_onednn_mm contain pointers to C++ state and are not serializable. Storing lambdas that capture these handlers in layer.gate_up_linear and layer.down_linear will cause issues if the model is serialized (e.g., with pickle or torch.save). Upon deserialization, the handler pointers will be invalid, which can lead to segmentation faults when the model is used or garbage collected.
To prevent this, the CPUDNNLGEMMHandler class should be made non-picklable by implementing __getstate__ to raise an exception. Since that class is not in this file, an alternative is to avoid storing these handlers on the torch.nn.Module instance if model serialization is a possibility. If serialization is not a supported use case for CPU-based models, this might be acceptable, but it's a significant risk.
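A minimal sketch of the suggested guard, assuming `CPUDNNLGEMMHandler` is a plain Python class wrapping the native handle (attribute names and the error message are placeholders, not code from this PR):

```python
class CPUDNNLGEMMHandler:
    """Wraps a native oneDNN GEMM primitive; the handle is process-local."""

    def __getstate__(self):
        # Refuse pickling: the underlying C++ state cannot survive
        # serialization, and restoring it would leave a dangling pointer.
        raise TypeError(
            "CPUDNNLGEMMHandler holds native oneDNN state and cannot be "
            "pickled; recreate it with ops.create_onednn_mm() after loading."
        )
```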
Signed-off-by: Zhang Xiangze <Xiangze.Zhang@arm.com>
I saw a vLLM crash when running the unit test locally. It seems the issue is not directly related to this PR, but the MoE test exposed an issue in the oneDNN MatMul cache. The tests passed when I tested bf16 and fp32 separately.
@mgoin @bigPYJ1151 Can you help review this PR?
Hi @xiangze-arm - can you test again with #27472?
I have tested this PR with the #27472 fix. The unit test passed without crashing. The PR description has also been updated with the performance result (1.6x throughput improvement).
@bigPYJ1151 could you have a look at this please?
Signed-off-by: Zhang Xiangze <Xiangze.Zhang@arm.com>
I benchmarked the performance impact of the silu_and_mul change, and it is small (~3% overall improvement), so I have removed the silu_and_mul change to address the concerns. This PR now only adds the new oneDNN/ACL path in CPU fused MoE.
Thank you, I wasn't really asking for it to be removed, just for more insight into why we chose to use it.
@bigPYJ1151 could you please help review this change?
Ah, those CI failures are unrelated.
Signed-off-by: Zhang Xiangze <Xiangze.Zhang@arm.com>
Description
Test Plan
pytest tests/kernels/moe/test_moe.py -k test_cpu_fused_moe_basic

Test Result
Performance
With this PR, MoE can go into the onednn_mm path on AArch64 CPUs. On 32 Neoverse-N2 cores, this PR achieves about 1.6x throughput compared with the current default path.
Bench command: