…ra) and enable aten::fft_c2c (#526)
- The first PR of oneMKL support for PyTorch XPU.
- Enable the first oneMKL op, `fft_c2c` (see the usage sketch below).
- Add environment variable `USE_ONEMKL` to control whether to build with
  oneMKL XPU or not.
- HuggingFace GoogleFnet FP32 training/inference performance (bs=16) has
  been improved by ~2.3x/3.1x for Inductor and ~2.1x/2.6x for Eager on
  SPR56c + Max1550.
- TODO: #737 align claimed fft data type with CUDA in backward test.
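As a rough usage sketch (not taken from the PR itself), the snippet below assumes a build with `USE_ONEMKL` enabled and an available XPU device; calling `torch.fft.fft` on a complex input is expected to dispatch to the newly enabled `aten::fft_c2c`:

```python
# Minimal sketch: exercising the complex-to-complex FFT path on XPU.
# Assumes the extension was built with USE_ONEMKL and that an XPU device is present.
import torch

if torch.xpu.is_available():
    x = torch.randn(1024, dtype=torch.complex64, device="xpu")
    y = torch.fft.fft(x)        # complex input -> aten::fft_c2c
    x_back = torch.fft.ifft(y)  # inverse transform, also fft_c2c
    print(torch.allclose(x, x_back, atol=1e-5))
```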
🚀 The feature, motivation and pitch
The set of data types natively supported on XPU is larger than on CUDA. For example, we always support BF16, while CUDA lacks BF16 support in some operators. The test infrastructure therefore applies a blanket rule that adds BF16 to our claimed data types:
backward_dtypesIfXPU = backward_dtypesIfCUDA + bfloat16
However, the FFT operators should not claim BF16 support, so this assumption in the test infrastructure causes several FFT unit tests to fail.
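A minimal sketch of the intended fix, using hypothetical names (`xpu_backward_dtypes`, `FFT_OPS`) rather than the actual test-infrastructure code, which carves the FFT operators out of the blanket BF16 rule:

```python
# Hypothetical illustration only; the real rule lives in the torch-xpu-ops
# test infrastructure (backward_dtypesIfXPU derived from backward_dtypesIfCUDA).
import torch

# Operators that should NOT claim BF16 support on XPU (FFT family).
FFT_OPS = {"fft.fft", "fft.ifft", "fft.fftn", "fft.ifftn"}

def xpu_backward_dtypes(op_name, backward_dtypes_if_cuda):
    """Derive the claimed XPU backward dtypes from the CUDA ones."""
    dtypes = set(backward_dtypes_if_cuda)
    if op_name not in FFT_OPS:
        # General rule: XPU natively supports BF16 even where CUDA does not.
        dtypes.add(torch.bfloat16)
    return dtypes

# Example: FFT ops keep the CUDA dtype list unchanged (no bfloat16 added).
print(xpu_backward_dtypes("fft.fft", {torch.float32, torch.complex64}))
```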
Alternatives
No response
Additional context
No response