
PR #2859: [XLA GPU] Support for mix type gemm bias addition fusion #3505

Closed

Conversation

copybara-service[bot]


Imported from GitHub PR #2859

Consider a gemm of the form D = alpha * A * B + beta * C. We observed that when A and C have different types, it appears in XLA HLO as three instructions: Add(Convert(Gemm(A, B)), C).

However, cuBLAS and cuBLASLt both support gemms where A and C have different types. Specifically, both support 14 different type combinations of (ComputeType, alpha/beta Type, A/B Type, C/D Type). Therefore we can create a single custom call Gemm(A, B, C) that fuses these three instructions and calls the corresponding gemm routine. This fusion opportunity arises in the weight gradient accumulation scenario.

This PR adds support for this mixed-type gemm bias addition fusion. Even though cuBLAS and cuBLASLt currently support 14 combinations, XLA does not support choosing a different compute type and scale type for each A/C type pair, and we are not planning to add that here because we are not sure how it might affect precision. Therefore, we add support for 3 cases of the form (ComputeType, alpha/beta Type, A/B Type, C/D Type):

  1. (fp32, fp32, fp16, fp32)
  2. (fp32, fp32, bf16, fp32)
  3. (fp32, fp32, s8, fp32)

This works for both legacy cuBLAS and cuBLASLt, and it supports both regular and batched gemms.
Copybara import of the project:

--
5caa8b0 by Shanbin Ke ske@nvidia.com:

init upstream

--
3900444 by Shanbin Ke ske@nvidia.com:

remove commented code

--
adb1e0e by Shanbin Ke ske@nvidia.com:

rename AreTypes* to TypesAre*

--
7f04569 by Shanbin Ke ske@nvidia.com:

remove some empty lines

--
d836d4c by Shanbin Ke ske@nvidia.com:

fix hlo_verifier failing issue

--
7e9f1d6 by Shanbin Ke ske@nvidia.com:

add print in AsBlasDataType

--
108caef by Shanbin Ke ske@nvidia.com:

fix unsupported type issue

--
c7d4862 by Shanbin Ke ske@nvidia.com:

add tests to OSS

--
6480ecb by Shanbin Ke ske@nvidia.com:

add s8 f32 support

--
1dee25f by Shanbin Ke ske@nvidia.com:

enhance compute type choice

--
d5f6fb1 by Shanbin Ke ske@nvidia.com:

guard mix type gemm by xla_gpu_simplify_all_fp_conversions

--
5c853fa by Shanbin Ke ske@nvidia.com:

explicitly set xla_gpu_simplify_all_fp_conversions=true

--
d91c05b by Shanbin Ke ske@nvidia.com:

fix file check

Merging this change closes #2859

FUTURE_COPYBARA_INTEGRATE_REVIEW=#2859 from Cjkkkk:mix_type_gemm d91c05b

copybara-service bot force-pushed the test_539196462 branch 2 times, most recently from f814a7a to 10ace82 on June 13, 2023 02:38
copybara-service bot closed this on June 13, 2023
copybara-service bot deleted the test_539196462 branch on June 13, 2023 03:43