Set up MKL computation backend for PyTorch XPU operators (Linear Algebra) and enable aten::fft_c2c #526
Conversation
Force-pushed from ee57739 to 797a737
Force-pushed from 72a5828 to d1a5593
Force-pushed from 405b91e to 290e895
Force-pushed from 2e37c81 to fc60425
Force-pushed from fc60425 to 4b280d7
Force-pushed from 22aa279 to c94442e
Please add the MKL source in https://github.com/intel/torch-xpu-ops/blob/main/.github/scripts/env.sh, and enable MKL in the build and test.
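For reference, a minimal sketch of what such an addition to env.sh might look like, assuming a standard oneAPI installation under /opt/intel/oneapi; the path and the `latest` version symlink are illustrative, not the repository's actual CI configuration:

```bash
# Illustrative sketch only: sourcing oneMKL in .github/scripts/env.sh.
# The install prefix and "latest" symlink are assumptions about a standard
# oneAPI installation, not this repository's CI setup.
source /opt/intel/oneapi/mkl/latest/env/vars.sh   # sets MKLROOT and library paths

# Sanity check that MKL is visible to the build and test steps.
echo "MKLROOT=${MKLROOT:-<not set>}"
```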
Force-pushed from 5671162 to 69f642b
MKL source has been added to CI, and a new environment variable (USE_ONEMKL) has been introduced to gate the oneMKL build.
Force-pushed from 86bbdfb to 8f44412
Force-pushed from 8f44412 to 65a0071
Force-pushed from eb6f1b0 to 95cb7e1
Force-pushed from 95cb7e1 to fdfbbff
Force-pushed from 28ff5b8 to 248a3e8
It should be mentioned that we need to unify the recommended MKL usage for building and runtime. See #1325.
This enables fft_c2c and adds USE_ONEMKL to control whether to build with oneMKL XPU or not.
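A minimal usage sketch, under the assumption that USE_ONEMKL is consumed at build time as an environment variable and that an XPU device is available at runtime; the build command and the runtime check are illustrative, not taken from this PR:

```bash
# Illustrative only. Whether USE_ONEMKL is read as an environment variable or a
# CMake option is an assumption; the PR states only that it gates the oneMKL XPU build.
export USE_ONEMKL=1
python setup.py develop   # hypothetical build invocation for the PyTorch/XPU tree

# Quick runtime check: torch.fft.fft on a complex input dispatches to aten::fft_c2c.
python -c "
import torch
x = torch.randn(8, dtype=torch.complex64, device='xpu')
print(torch.fft.fft(x))
"
```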