[webgpu] Optimize MatMulNBits for f16 Block32 prefill performance #23908
Conversation
Tests: model_benchmark.exe -i Phi-3.5-mini-instruct-onnx-web -l 1000
@qjia7 @sushraja-msft @jchen10
Add shader for easy review.
/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline
Azure Pipelines successfully started running 2 pipeline(s).
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline
/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models
/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI
Azure Pipelines successfully started running 4 pipeline(s).
Azure Pipelines successfully started running 9 pipeline(s).
Azure Pipelines successfully started running 4 pipeline(s).
@@ -867,6 +978,40 @@ Status MatMulNBits::ComputeInternal(onnxruntime::webgpu::ComputeContext& context
  return context.RunProgram(mul_program);
}

// Block32 prefill program
// This program is optimized for Block32 prefill using Tile16x128.
const bool use_block32_program = block_size == 32 && batch_count == 1 && !has_zero_points &&
What was the impact on generation speed? Should you restrict this shader with an M > kMinMForTileOptimization check?
Your shader also supports batch sizes other than 1, and zero points; perhaps relax that check. Okay to do in a follow-up change.
> What was the impact on generation speed? Should you restrict this shader with an M > kMinMForTileOptimization check?
Thanks for the review.
The performance is similar to the default shader when M == 1; I only see a performance improvement when M is greater than 2, according to the test results.
I will implement the M > kMinMForTileOptimization restriction to enforce this requirement.
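(For illustration, a minimal sketch of the dispatch condition with that restriction applied; kMinMForTileOptimization is assumed to be the threshold constant already used by the other tiled paths in matmul_nbits.cc, and the remaining names come from the diff context below.)

```cpp
// Sketch only: the gating condition with the M restriction applied.
// kMinMForTileOptimization is assumed to be the existing threshold constant
// used by the other tiled paths; the other names are from the PR's diff.
const bool use_block32_program = block_size == 32 && batch_count == 1 && !has_zero_points &&
                                 components_a == 4 && components_b == 4 &&
                                 M > kMinMForTileOptimization &&  // fall back to the decode shader for small M
                                 context.AdapterInfo().vendor == std::string_view{"intel"};
```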
// This program is optimized for Block32 prefill using Tile16x128.
const bool use_block32_program = block_size == 32 && batch_count == 1 && !has_zero_points &&
                                 components_a == 4 && components_b == 4 && M > 1 &&
                                 context.AdapterInfo().vendor == std::string_view{"intel"};
What about this shader makes it Intel-specific? Can we remove this check?
I only have Intel devices to test the performance on at the moment.
After verification on a broader range of devices, it can certainly be removed in a follow-up change.
@@ -781,6 +781,117 @@ Status DP4AMatMulNBitsProgram::GenerateShaderCode(ShaderHelper& shader) const {
  return Status::OK();
}

Status MatMulNBitsBlock32Program::GenerateShaderCode(ShaderHelper& shader) const {
Thank you for working on this; some thoughts. Your shader looks to be a generation-optimized shader with a different tile size than the current one. As far as matmul goes, there are two genres of shaders, with each genre having variants for the special ops they use.
Generation-optimized shaders - these keep only A in shared memory: pool all threads to load A into shared memory, then have each thread work on a B against that A.
Prefill-optimized shaders - these should use co-operative matmul: https://www.khronos.org/assets/uploads/developers/presentations/Cooperative_Matrix_May22.pdf
They keep both A and B in shared memory, pool all threads to load shared memory, and then each subgroup within the workgroup works on a subtile. This means parts of the loads required for one subtile are shared with other subtiles, which saves loads.
From what I can tell, yours is a generation-mode shader. If you are seeing good perf with this tile size, we should just replace the current generation shader with yours. Even better if we can make these shaders have tunable tile sizes.
Net, I think we should try to avoid having similar shaders that don't differ algorithmically. Please do share numbers for generation perf with your shader; perhaps we can replace the current generation shader with yours.
As to why you are seeing great prefill speed: it's because our prefill fp16 shader is not based on co-operative matmul (we haven't gotten around to rewriting that shader that way; if you can pick that up, that would be amazing as well). The DP4A matmul shader uses techniques of co-operative matmul, and we use it for many models by passing accuracy_level 4 to model_builder.py.
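(As a CPU-side illustration of the co-operative tiling idea described above — all names and tile sizes here are invented for the sketch, not taken from the actual shaders:)

```cpp
#include <array>

// Illustrative tile sizes only; a real shader tunes these per device.
constexpr int kTileM = 16, kTileN = 16, kTileK = 32;

// One K-step of a co-operative matmul workgroup, simulated on the CPU:
// both an A tile and a B tile are staged in "shared memory" once, and
// every output subtile accumulates from the staged copies, so each
// global load is reused across subtiles instead of re-fetched.
void CooperativeTileStep(const float* A, const float* B, float* C,
                         int N, int K, int row0, int col0, int k0) {
  std::array<std::array<float, kTileK>, kTileM> a_tile;  // shared A tile
  std::array<std::array<float, kTileN>, kTileK> b_tile;  // shared B tile

  // All threads of the workgroup would cooperate on these loads.
  for (int m = 0; m < kTileM; ++m)
    for (int k = 0; k < kTileK; ++k)
      a_tile[m][k] = A[(row0 + m) * K + (k0 + k)];
  for (int k = 0; k < kTileK; ++k)
    for (int n = 0; n < kTileN; ++n)
      b_tile[k][n] = B[(k0 + k) * N + (col0 + n)];

  // Each subgroup then accumulates its subtile purely from shared memory.
  for (int m = 0; m < kTileM; ++m)
    for (int n = 0; n < kTileN; ++n) {
      float acc = 0.0f;
      for (int k = 0; k < kTileK; ++k)
        acc += a_tile[m][k] * b_tile[k][n];
      C[(row0 + m) * N + (col0 + n)] += acc;
    }
}
```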
> From what I can tell, yours is a generation-mode shader. If you are seeing good perf with this tile size, we should just replace the current generation shader with yours. Even better if we can make these shaders have tunable tile sizes.
This shader optimizes Input_A loading by leveraging workgroup-wide collective load operations within a workgroup (size 128), storing a 16x8 tile into shared memory with a single instruction.
This approach increases the tiling size, which improves performance when the input dimension M is sufficiently large. Specifically, for M = 1 the performance does not exceed that of the default decode shader.
I'm trying to avoid making too many modifications in a single PR, to keep it easier to review and comparable with the previous shader. What are your thoughts?
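(A rough CPU-side illustration of the collective load being described — with a workgroup of 128 threads and vec4 elements (components_a == 4), a 16x8 tile of vec4 is exactly 128 loads, one per thread. All names here are invented for the sketch, not the PR's actual WGSL:)

```cpp
#include <array>
#include <cstdint>

// A workgroup of 128 "threads" loads a 16x8 tile of vec4 input_a values
// into shared memory, one load per thread. Vec4, tile_a, and LoadTileA
// are stand-in names for this sketch.
struct Vec4 { float x, y, z, w; };
constexpr uint32_t kWorkgroupSize = 128;  // == 16 rows * 8 vec4 columns

std::array<std::array<Vec4, 8>, 16> tile_a;  // stand-in for workgroup shared memory

void LoadTileA(const Vec4* input_a, uint32_t a_stride_vec4,
               uint32_t row0, uint32_t col0) {
  // In WGSL each invocation executes the body once; here we loop over ids.
  for (uint32_t local_idx = 0; local_idx < kWorkgroupSize; ++local_idx) {
    uint32_t row = local_idx / 8;  // 0..15
    uint32_t col = local_idx % 8;  // 0..7
    tile_a[row][col] = input_a[(row0 + row) * a_stride_vec4 + (col0 + col)];
  }
}
```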
Yes, we observed quite good performance at accuracy level 4 using the DP4A shader. I'll investigate a similar approach for f16.
Description
This commit improves MatMulNBits f16 Block32 prefill performance by increasing the tiling size and enhancing memory efficiency. It achieves a +2x performance boost on Intel iGPUs for the Phi-3.5-mini f16 model.
Motivation and Context
See above.