
erfcx_xpu and ndtri_xpu not implemented for 'BFloat16' #1338

Open
huaiyuzh opened this issue Feb 6, 2025 · 1 comment

huaiyuzh commented Feb 6, 2025

erfcx_xpu and ndtri_xpu cause an IPEX unit-test (UT) failure. In IPEX 2.6, we override these ops with the IPEX implementation to make the UT pass.
ipex/tests/gpu/example/test_special_ops.py::TestTorchMethod::test_erfcx - RuntimeError: "erfcx_xpu" not implemented for 'BFloat16'
ipex/tests/gpu/example/test_special_ops.py::TestTorchMethod::test_ndtri_entr - RuntimeError: "ndtri_xpu" not implemented for 'BFloat16'
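To illustrate where the error comes from, here is a minimal Python sketch (not the actual ATen code) that emulates what AT_DISPATCH_FLOATING_TYPES does: the macro only generates kernels for Float and Double, so any other dtype falls through to a RuntimeError of exactly this shape. The function and set names below are illustrative, not real PyTorch APIs.

```python
import math

# Sketch: AT_DISPATCH_FLOATING_TYPES only instantiates kernels for
# float32 ("Float") and float64 ("Double"); every other dtype string
# falls through to the "not implemented" RuntimeError, which is the
# error the IPEX unit tests hit for 'BFloat16'.
FLOATING_TYPES = {"Float", "Double"}  # what the macro covers

def dispatch_erfcx(dtype: str, x: float) -> float:
    """Mimic the xpu dispatch for erfcx(x) = exp(x^2) * erfc(x)."""
    if dtype not in FLOATING_TYPES:
        raise RuntimeError(f'"erfcx_xpu" not implemented for \'{dtype}\'')
    return math.exp(x * x) * math.erfc(x)

# float32 path works; bfloat16 raises, matching the UT failures above.
print(dispatch_erfcx("Float", 0.0))  # erfcx(0) = exp(0) * erfc(0) = 1.0
try:
    dispatch_erfcx("BFloat16", 0.0)
except RuntimeError as e:
    print(e)
```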

@huaiyuzh huaiyuzh assigned huaiyuzh and xytintel and unassigned huaiyuzh Feb 6, 2025
@daisyden daisyden added this to the PT2.7 milestone Feb 20, 2025
@xytintel xytintel assigned chunhuanMeng and unassigned xytintel Feb 25, 2025

chunhuanMeng commented Feb 25, 2025

This is because IPEX adds BFloat16 support for these two ops, but torch-xpu-ops does not have this support, and neither does stock PyTorch. If you really need it, please raise a PR in PyTorch first; torch-xpu-ops will then make the corresponding changes. We keep our code consistent with the design of stock PyTorch.
Reference links:
torch-xpu-ops:

AT_DISPATCH_FLOATING_TYPES(iter.common_dtype(), "erfcx_xpu", [&]() {

AT_DISPATCH_FLOATING_TYPES(iter.common_dtype(), "ndtri_xpu", [&]() {

cuda:
https://github.com/pytorch/pytorch/blob/bb7e8fbd668c7c8931436b4a935b26911cbe0daf/aten/src/ATen/native/cuda/UnarySpecialOpsKernel.cu#L310
https://github.com/pytorch/pytorch/blob/bb7e8fbd668c7c8931436b4a935b26911cbe0daf/aten/src/ATen/native/cuda/UnarySpecialOpsKernel.cu#L230
ipex:
https://github.com/intel/intel-extension-for-pytorch/blob/release/xpu/2.6.10/csrc/gpu/aten/operators/SpecialOps.cpp#L52
https://github.com/intel/intel-extension-for-pytorch/blob/release/xpu/2.6.10/csrc/gpu/aten/operators/SpecialOps.cpp#L65
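As a hypothetical sketch of what such a PR would change: ATen provides widened dispatch macros (e.g. AT_DISPATCH_FLOATING_TYPES_AND with at::kBFloat16) that add low-precision types to the kernel set; low-precision kernels conventionally compute in float32 and round the result back. The Python below emulates that pattern for ndtri; the helper names and the widened set are assumptions for illustration, not the actual patch.

```python
import struct
from statistics import NormalDist

def to_bfloat16(x: float) -> float:
    """Round a float to a nearby bfloat16 value by keeping only the top
    16 bits of its float32 representation (approximate rounding)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits = (bits + 0x8000) & 0xFFFF0000  # round, then truncate mantissa
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Hypothetical widened dispatch set, analogous to switching the kernel
# from AT_DISPATCH_FLOATING_TYPES to a ..._AND(kBFloat16, ...) macro.
SUPPORTED = {"Float", "Double", "BFloat16"}

def dispatch_ndtri(dtype: str, p: float) -> float:
    """ndtri(p): inverse CDF of the standard normal distribution.
    Computes in full precision, then rounds back for bfloat16 inputs."""
    if dtype not in SUPPORTED:
        raise RuntimeError(f'"ndtri_xpu" not implemented for \'{dtype}\'')
    result = NormalDist().inv_cdf(p)
    return to_bfloat16(result) if dtype == "BFloat16" else result

print(dispatch_ndtri("Float", 0.5))     # ndtri(0.5) = 0.0
print(dispatch_ndtri("BFloat16", 0.5))  # 0.0, after bfloat16 rounding
```

The compute-in-float32, round-on-output pattern is the standard way ATen handles reduced-precision dtypes for transcendental ops, which is why adding BFloat16 is mostly a dispatch-macro change rather than a new kernel.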
