Add aten::_thnn_fused_gru_cell and _thnn_fused_lstm_cell #926

Open

wants to merge 20 commits into base: main
Conversation

yucai-intel
Contributor

@yucai-intel yucai-intel commented Sep 20, 2024

  • thnn_fused_gru_cell_forward
  • thnn_fused_gru_cell_backward
  • thnn_fused_lstm_cell_forward
  • thnn_fused_lstm_cell_backward

#include <ATen/core/op_registration/adaption.h>
#include <ATen/native/cpu/mixed_data_type.h>
#include <ATen/native/xpu/sycl/GRUFusedCellKernels.h>
#include <ATen/xpu/XPUNativeFunctions.h>

From the build logs, this header file does not seem to exist.
Would including XPUFunctions.h instead of XPUNativeFunctions.h make the build work?

#include <ATen/core/op_registration/adaption.h>
#include <ATen/native/cpu/mixed_data_type.h>
#include <ATen/native/xpu/sycl/RNNKernels.h>

Contributor

#include <ATen/core/op_registration/adaption.h>
#include <ATen/native/cpu/mixed_data_type.h>

Please check whether we actually need these header files.


auto hy = at::empty_like(hx, LEGACY_CONTIGUOUS_MEMORY_FORMAT);

AT_DISPATCH_FLOATING_TYPES_AND2(
Contributor

How do we handle non-contiguous input? The CUDA implementation uses TensorInfo (which carries sizes and strides), but here we use only a raw data pointer, without stride information.

const Tensor& hidden_bias =
c10::value_or_else(hidden_bias_opt, [] { return Tensor(); });

auto batched_input = true;
Contributor
If this is hard-coded to true, can we remove the variable?

@yucai-intel yucai-intel changed the title Add aten::_thnn_fused_gru_cell (forward and backward) Add aten::_thnn_fused_gru_cell and _thnn_fused_lstm_cell Nov 8, 2024
@xytintel
Contributor

xytintel commented Nov 8, 2024

@yucai-intel Please show the test cases.

@yucai-intel
Contributor Author

2024-11-12T10:33:51.5087379Z test_modules_xpu.py::TestModuleXPU::test_device_ctx_init_nn_LSTMCell_xpu_float32 SKIPPED [ 11%]
2024-11-12T10:33:51.5096043Z test_modules_xpu.py::TestModuleXPU::test_device_ctx_init_nn_LSTMCell_xpu_float64 SKIPPED [ 11%]
2024-11-12T10:33:51.6437027Z test_modules_xpu.py::TestModuleXPU::test_errors_nn_LSTMCell_xpu_float32 PASSED [ 15%]
2024-11-12T10:33:51.6453961Z test_modules_xpu.py::TestModuleXPU::test_errors_nn_LSTMCell_xpu_float64 PASSED [ 15%]
2024-11-12T10:33:52.4885672Z test_modules_xpu.py::TestModuleXPU::test_factory_kwargs_nn_LSTMCell_xpu_float32 PASSED [ 20%]
2024-11-12T10:33:52.4915521Z test_modules_xpu.py::TestModuleXPU::test_factory_kwargs_nn_LSTMCell_xpu_float64 PASSED [ 20%]
2024-11-12T10:34:10.7741661Z test_modules_xpu.py::TestModuleXPU::test_if_train_and_eval_modes_differ_nn_LSTMCell_xpu_float32 PASSED [ 40%]
2024-11-12T10:34:12.6997955Z test_modules_xpu.py::TestModuleXPU::test_memory_format_nn_LSTMCell_xpu_float32 PASSED [ 47%]
2024-11-12T10:34:23.6199228Z test_modules_xpu.py::TestModuleXPU::test_non_contiguous_tensors_nn_LSTMCell_xpu_float32 PASSED [ 61%]
2024-11-12T10:34:42.6044761Z test_modules_xpu.py::TestModuleXPU::test_repr_nn_LSTMCell_xpu_float32 PASSED [ 69%]
2024-11-12T10:34:42.6059674Z test_modules_xpu.py::TestModuleXPU::test_repr_nn_LSTMCell_xpu_float64 PASSED [ 69%]
2024-11-12T10:34:45.4856338Z test_modules_xpu.py::TestModuleXPU::test_save_load_nn_LSTMCell_xpu_float32 PASSED [ 76%]
2024-11-12T10:34:53.0802266Z test_modules_xpu.py::TestModuleXPU::test_to_empty_nn_LSTMCell_swap_False_xpu_float32 PASSED [ 83%]
2024-11-12T10:34:53.0821022Z test_modules_xpu.py::TestModuleXPU::test_to_empty_nn_LSTMCell_swap_True_xpu_float32 PASSED [ 83%]
2024-11-12T10:34:55.1033801Z test_modules_xpu.py::TestModuleXPU::test_to_nn_LSTMCell_swap_False_set_grad_False_xpu_float32 PASSED [ 93%]
2024-11-12T10:34:55.1053526Z test_modules_xpu.py::TestModuleXPU::test_to_nn_LSTMCell_swap_False_set_grad_True_xpu_float32 PASSED [ 93%]
2024-11-12T10:34:55.1098168Z test_modules_xpu.py::TestModuleXPU::test_to_nn_LSTMCell_swap_True_set_grad_False_xpu_float32 PASSED [ 93%]

@yucai-intel
Contributor Author

2024-11-12T10:33:51.6360467Z test_modules_xpu.py::TestModuleXPU::test_errors_nn_GRUCell_xpu_float32 PASSED [ 15%]
2024-11-12T10:33:51.6379519Z test_modules_xpu.py::TestModuleXPU::test_errors_nn_GRUCell_xpu_float64 PASSED [ 15%]
2024-11-12T10:33:52.2665280Z test_modules_xpu.py::TestModuleXPU::test_factory_kwargs_nn_GRUCell_xpu_float32 PASSED [ 18%]
2024-11-12T10:33:52.2694063Z test_modules_xpu.py::TestModuleXPU::test_factory_kwargs_nn_GRUCell_xpu_float64 PASSED [ 18%]
2024-11-12T10:34:42.5242041Z test_modules_xpu.py::TestModuleXPU::test_repr_nn_GRUCell_xpu_float32 PASSED [ 67%]
2024-11-12T10:34:42.5256334Z test_modules_xpu.py::TestModuleXPU::test_repr_nn_GRUCell_xpu_float64 PASSED [ 68%]
2024-11-12T10:34:52.9706310Z test_modules_xpu.py::TestModuleXPU::test_to_empty_nn_GRUCell_swap_False_xpu_float32 PASSED [ 82%]
2024-11-12T10:34:52.9724647Z test_modules_xpu.py::TestModuleXPU::test_to_empty_nn_GRUCell_swap_True_xpu_float32 PASSED [ 82%]
2024-11-12T10:34:54.9385484Z test_modules_xpu.py::TestModuleXPU::test_to_nn_GRUCell_swap_False_set_grad_False_xpu_float32 PASSED [ 91%]
2024-11-12T10:34:54.9405020Z test_modules_xpu.py::TestModuleXPU::test_to_nn_GRUCell_swap_False_set_grad_True_xpu_float32 PASSED [ 91%]

4 participants