[SYCL][CUDA] Add bf16 builtins operating on storage types #5748
Conversation
Changes in llvm intrinsics LGTM.
RT changes LGTM.
@t4c1, should there be a test for this change?
Which change do you have in mind? There are some tests being added to the test suite (linked in the PR description). Or do you mean something else needs testing?
RT changes LGTM
@t4c1, sorry, I didn't notice that at first glance. Seems like that'll do.
I had to merge with the sycl branch to resolve the conflict with 53a9d54.
Thanks.
@t4c1, could you fix these warnings, please?
This PR introduces full support for element-wise operations in the CUDA backend. `wi_data`, `get_matrix_fill`, and `joint_matrix.get_wi_data()` are introduced for portability with the Intel backend. In addition, in the CUDA backend users can call `joint_matrix.wi_marray` to access the marray that stores the work-item-owned elements of the matrix and perform optimized element-wise operations using math functions that take marrays. bfloat16 element-wise operation support is also included: this PR adds bfloat16 scalar/marray implementations of the fma, fmax, fmin, and fabs math functions, replacing the existing uint16_t "storage type" implementations. The bfloat16 fma_relu function implementation has now been added directly in #5749. The existing temporary uint16_t implementations (introduced in #5748, with unmerged tests in intel/llvm-test-suite#897) have been removed, since the bfloat16 implementations replace them.

Signed-off-by: jack.kirk <jack.kirk@codeplay.com>
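For readers unfamiliar with the extension, a minimal sketch of the element-wise usage described above is shown below. The names (`get_wi_data`, `wi_marray`, the marray math overloads) follow the experimental SYCL matrix/bfloat16 extension work referenced in this thread; the exact signatures are assumptions, not code taken from this PR.

```cpp
// Sketch only: element-wise post-processing of the work-item-owned slice of a
// joint_matrix. Namespaces and signatures are assumptions based on the
// experimental SYCL matrix extension, not copied from this PR.
#include <sycl/sycl.hpp>
#include <cstddef>

template <typename JointMatrix>
void halve_and_abs(JointMatrix &C) {
  // Portable path (also valid on the Intel backend): iterate over WI-owned data.
  auto data = C.get_wi_data();
  for (std::size_t i = 0; i < data.length(); ++i)
    data[i] = data[i] * 0.5f;

  // CUDA-backend path: operate on the whole marray of WI-owned elements at
  // once, e.g. with marray-overloaded math functions.
  C.wi_marray = sycl::fabs(C.wi_marray);
}
```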
Add bf16 builtins operating on storage types. Partially implements https://github.com/intel/llvm/pull/5645/files for CUDA (only the operations on storage types); a rough usage sketch is included at the end of this description.
This PR includes a bugfix for some NVPTX intrinsics, which will also be pushed upstream.
Blocked by #5724.
Tests for this are in intel/llvm-test-suite#897.
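For context, the interface being partially implemented looks roughly like the sketch below. The namespace and the storage-type overloads are assumptions drawn from the extension proposal linked above (and from the later bfloat16 replacements described earlier in this thread), not verbatim from this patch.

```cpp
// Hypothetical sketch of bf16 math builtins operating on the uint16_t
// "storage type": each uint16_t argument holds the raw bit pattern of a
// bfloat16 value. Namespace and signatures are assumptions, not the merged API.
#include <sycl/sycl.hpp>
#include <cstdint>
namespace exp = sycl::ext::oneapi::experimental;

uint16_t bf16_fused_op(uint16_t a, uint16_t b, uint16_t c) {
  uint16_t t = exp::fma(a, b, c);  // assumed: bf16 fma on storage types
  t = exp::fmax(t, exp::fabs(c));  // assumed: bf16 fmax/fabs on storage types
  return t;
}
```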