Setup MKL computation backend for Pytorch XPU operators (Linear Algebra) and enable aten::fft_c2c #526
Conversation
Please add the MKL source in https://github.com/intel/torch-xpu-ops/blob/main/.github/scripts/env.sh, and enable MKL in the build and test.
The MKL source has been added to CI, along with a new environment variable.
The first PR of oneMKL for PyTorch XPU.

Enable the first oneMKL op, fft_c2c. Add the environment variable USE_ONEMKL to control whether to build with oneMKL XPU support.

TODO: #737 align the claimed FFT data type with CUDA in the backward test.
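As a minimal usage sketch (not part of this PR; the availability check and device string are assumptions), the snippet below assumes a PyTorch build that includes torch-xpu-ops compiled with USE_ONEMKL and a PyTorch version that exposes torch.xpu. Calling torch.fft.fft on a complex tensor exercises the complex-to-complex path (aten::fft_c2c) that this PR routes to oneMKL on XPU.

```python
# Minimal sketch: exercising the aten::fft_c2c path on an Intel XPU device.
# Assumes a PyTorch build with torch-xpu-ops compiled with USE_ONEMKL and a
# PyTorch version that provides torch.xpu.is_available().
import torch

def fft_c2c_smoke_test():
    if not torch.xpu.is_available():
        print("No XPU device available; skipping")
        return
    x = torch.randn(8, 8, dtype=torch.complex64, device="xpu")
    # Complex-to-complex FFT; on XPU this dispatches to the c2c kernel.
    y = torch.fft.fft(x, dim=-1)
    # Quick sanity check against the CPU reference implementation.
    y_ref = torch.fft.fft(x.cpu(), dim=-1)
    print("matches CPU:", torch.allclose(y.cpu(), y_ref, atol=1e-5))

fft_c2c_smoke_test()
```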