

test infrastructure: fft: Align claimed data type with CUDA in backward test #737

Open
fengyuan14 opened this issue Aug 9, 2024 · 0 comments

@fengyuan14 (Contributor)

🚀 The feature, motivation and pitch

XPU natively supports more data types than CUDA does. For example, XPU always supports BF16, while some CUDA operators do not. To account for this, we added a blanket rule in the test infrastructure that extends our claimed data types with BF16:
backward_dtypesIfXPU = backward_dtypesIfCUDA + bfloat16.
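
As a rough sketch of what this rule amounts to (`backward_dtypesIfCUDA` follows PyTorch's OpInfo naming; the helper name and XPU-side hook shown here are illustrations, not the exact torch-xpu-ops code):

    import torch

    def claimed_backward_dtypes_xpu(op_info):
        # Blanket rule: claim everything CUDA claims for backward,
        # plus bfloat16. `backward_dtypesIfCUDA` is a standard OpInfo
        # attribute; this helper and its hook point are hypothetical.
        return set(op_info.backward_dtypesIfCUDA) | {torch.bfloat16}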

However, fft operators should not claim BF16 support. This blanket assumption in the test infrastructure causes the following fft unit tests to fail (a sketch of a possible fix follows the list):

    "test_dtypes_fft_fft2_xpu",
    "test_dtypes_fft_fft_xpu",
    "test_dtypes_fft_fftn_xpu",
    "test_dtypes_fft_hfft2_xpu",
    "test_dtypes_fft_hfft_xpu",
    "test_dtypes_fft_hfftn_xpu",
    "test_dtypes_fft_ifft2_xpu",
    "test_dtypes_fft_ifft_xpu",
    "test_dtypes_fft_ifftn_xpu",
    "test_dtypes_fft_ihfft2_xpu",
    "test_dtypes_fft_ihfft_xpu",
    "test_dtypes_fft_ihfftn_xpu",
    "test_dtypes_fft_irfft2_xpu",
    "test_dtypes_fft_irfft_xpu",
    "test_dtypes_fft_irfftn_xpu",
    "test_dtypes_fft_rfft2_xpu",
    "test_dtypes_fft_rfft_xpu",
    "test_dtypes_fft_rfftn_xpu",

Alternatives

No response

Additional context

No response

@fengyuan14 added the enhancement and feature labels Aug 9, 2024
@fengyuan14 added this to the PT2.6 milestone Aug 9, 2024
@riverliuintel modified the milestones: PT2.6 → PT2.7 Nov 29, 2024
github-merge-queue bot pushed a commit that referenced this issue Jan 24, 2025
…ra) and enable aten::fft_c2c (#526)

- The first PR of oneMKL for PyTorch XPU.
- Enable the first oneMKL op, `fft_c2c`.
- Add environment variable `USE_ONEMKL` to control whether to build with
oneMKL XPU or not.
- HuggingFace GoogleFnet FP32 Training/Inference performance (bs=16) has
been improved by ~2.3x/3.1x for Inductor and ~2.1x/2.6x for Eager on
SPR56c + Max1550.
- TODO: #737 align claimed fft data type with CUDA in backward test.