
Setup MKL computation backend for Pytorch XPU operators (Linear Algebra) and enable aten::fft_c2c #526

Merged
5 commits merged into intel:main
Jan 24, 2025

Conversation

CuiYifeng
Contributor

@CuiYifeng CuiYifeng commented Jul 2, 2024

  • The first PR of oneMKL for PyTorch XPU.
  • Enable the first oneMKL op, fft_c2c.
  • Add environment variable USE_ONEMKL to control whether to build with oneMKL XPU or not.
  • HuggingFace GoogleFnet FP32 training/inference performance (bs=16) has been improved by ~2.3x/3.1x for Inductor and ~2.1x/2.6x for Eager on SPR56c + Max1550.
  • TODO (test infrastructure, #737): align the claimed fft data type with CUDA in the backward test.
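For context, fft_c2c is the complex-to-complex FFT primitive that torch.fft.fft dispatches to for complex inputs; on an XPU build with USE_ONEMKL=1 it is backed by oneMKL. A minimal NumPy sketch of the same c2c semantics (NumPy stands in here because it computes the identical transform):

```python
import numpy as np

# A single complex exponential at frequency bin 1 over 8 samples.
x = np.exp(2j * np.pi * np.arange(8) / 8)

# Forward complex-to-complex FFT -- the operation fft_c2c implements.
X = np.fft.fft(x)

# All spectral energy lands in bin 1 for this input.
print(int(np.argmax(np.abs(X))))  # -> 1
```

On a oneMKL-enabled XPU build, the equivalent call would be `torch.fft.fft(t)` on a complex tensor residing on the `xpu` device.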

@CuiYifeng CuiYifeng force-pushed the yifeng/fft_c2c branch 2 times, most recently from ee57739 to 797a737 Compare July 4, 2024 16:13
@CuiYifeng CuiYifeng force-pushed the yifeng/fft_c2c branch 3 times, most recently from 72a5828 to d1a5593 Compare July 9, 2024 03:12
@fengyuan14 fengyuan14 marked this pull request as draft July 26, 2024 07:41
@CuiYifeng CuiYifeng force-pushed the yifeng/fft_c2c branch 5 times, most recently from 2e37c81 to fc60425 Compare August 2, 2024 12:41
@CuiYifeng CuiYifeng marked this pull request as ready for review August 2, 2024 12:42
@CuiYifeng CuiYifeng requested a review from fengyuan14 August 5, 2024 01:46
@fengyuan14 fengyuan14 changed the title Init MKL for Pytorch XPU and enable fft_c2c Setup MKL computation backend for Pytorch XPU operators (Linear Algebra) and enable aten::fft_c2c Aug 6, 2024
Contributor

@chuanqi129 chuanqi129 left a comment


Please add the mkl source in https://github.com/intel/torch-xpu-ops/blob/main/.github/scripts/env.sh, and enable mkl in the build and test.

@CuiYifeng CuiYifeng force-pushed the yifeng/fft_c2c branch 2 times, most recently from 5671162 to 69f642b Compare January 10, 2025 02:36
@CuiYifeng CuiYifeng requested a review from xytintel January 10, 2025 02:51
@CuiYifeng
Contributor Author

Please add mkl source in https://github.com/intel/torch-xpu-ops/blob/main/.github/scripts/env.sh, and enable the mkl into the build and test

MKL source has been added to CI.

@CuiYifeng
Contributor Author

The new environment variable USE_ONEMKL controls whether to build with oneMKL XPU or not.
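A hedged sketch of how the flag would be used at build time; the exact build entry point below is illustrative, not taken from this PR:

```shell
# Enable the oneMKL XPU backend (oneMKL-backed ops such as aten::_fft_c2c
# are only compiled in when this is set).
export USE_ONEMKL=1

# Setting USE_ONEMKL=0 would skip oneMKL and leave those ops unimplemented.
# Illustrative build step -- the project's actual entry point may differ:
# python setup.py develop
```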

@CuiYifeng CuiYifeng force-pushed the yifeng/fft_c2c branch 3 times, most recently from 86bbdfb to 8f44412 Compare January 13, 2025 07:49
@CuiYifeng CuiYifeng requested a review from fengyuan14 January 14, 2025 02:00
@CuiYifeng CuiYifeng force-pushed the yifeng/fft_c2c branch 2 times, most recently from eb6f1b0 to 95cb7e1 Compare January 22, 2025 15:13
@fengyuan14
Contributor

I have to mention that we need to unify the recommended MKL usage for building and runtime. See #1325.

@CuiYifeng CuiYifeng added this pull request to the merge queue Jan 24, 2025
Merged via the queue into intel:main with commit c040c37 Jan 24, 2025
7 checks passed
@CuiYifeng CuiYifeng deleted the yifeng/fft_c2c branch January 24, 2025 07:41
3 participants