PR #2859: [XLA GPU] Support for mix type gemm bias addition fusion #3505
Imported from GitHub PR #2859
Consider a gemm of the form

D = alpha * A * B + beta * C

We observed that when `A` and `C` have different types, it appears in XLA HLO as 3 instructions: `Add(Convert(Gemm(A, B)), C)`. However, cuBLAS and cuBLASLt both support gemms whose A and C have different types; to be more specific, both support 14 different type combinations of `(ComputeType, alpha/beta Type, A/B Type, C/D Type)`. Therefore we can create a single custom call `Gemm(A, B, C)` that fuses these 3 instructions and calls the corresponding gemm routine. This fusion opportunity was observed in the weight gradient accumulation scenario; a before/after sketch follows.
This PR adds support for this mixed-type gemm bias addition fusion. Even though cuBLAS and cuBLASLt support 14 combinations right now, XLA does not have support for choosing a different compute type and scale type for different A/C types, and we are not planning to add that here because we are not sure how it might affect precision. Therefore, we plan to add support for 2 cases of the form `(ComputeType, alpha/beta Type, A/B Type, C/D Type)`. This works for both legacy cuBLAS and cuBLASLt, and it should provide support for both regular gemm and batched gemm; a batched sketch is shown below.
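For the batched case, the fused call has the same shape with a leading batch dimension on all operands (again an illustrative sketch, assuming the rewriter handles batched dots through the same custom-call target):

```
// Batched variant: one fused mixed-type call with batch size 8.
%fused = f32[8,64,32] custom-call(f16[8,64,128] %a, f16[8,128,32] %b, f32[8,64,32] %c),
    custom_call_target="__cublas$gemm"
```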
Copybara import of the project:
--
5caa8b0 by Shanbin Ke <ske@nvidia.com>:
init upstream
--
3900444 by Shanbin Ke <ske@nvidia.com>:
remove commented code
--
adb1e0e by Shanbin Ke <ske@nvidia.com>:
rename AreTypes* to TypesAre*
--
7f04569 by Shanbin Ke <ske@nvidia.com>:
remove some empty lines
--
d836d4c by Shanbin Ke <ske@nvidia.com>:
fix hlo_verifier failing issue
--
7e9f1d6 by Shanbin Ke <ske@nvidia.com>:
add print in AsBlasDataType
--
108caef by Shanbin Ke <ske@nvidia.com>:
fix unsupported type issue
--
c7d4862 by Shanbin Ke <ske@nvidia.com>:
add tests to OSS
--
6480ecb by Shanbin Ke <ske@nvidia.com>:
add s8 f32 support
--
1dee25f by Shanbin Ke <ske@nvidia.com>:
enhance compute type choice
--
d5f6fb1 by Shanbin Ke <ske@nvidia.com>:
guard mix type gemm by xla_gpu_simplify_all_fp_conversions
--
5c853fa by Shanbin Ke <ske@nvidia.com>:
explicitly set xla_gpu_simplify_all_fp_conversions=true
--
d91c05b by Shanbin Ke <ske@nvidia.com>:
fix file check
Merging this change closes #2859
FUTURE_COPYBARA_INTEGRATE_REVIEW=#2859 from Cjkkkk:mix_type_gemm d91c05b