src: cpu: aarch64: lowp_matmul_sq: Make weights constant #2212

Draft: wants to merge 3 commits into main

Conversation

@fadara01 (Contributor) commented Nov 8, 2024

Description

Setting the weights as constant allows us to avoid redundant pretranspose operations in Arm Compute Library (ACL) every time execute is called (they are now run once and cached). This delivers big speedups, especially for relatively small matmuls.
Note that this is a temporary fix that needs to be handled carefully by primitive caches in frameworks, since the ACL object now holds more state - i.e. we want to make sure that the cache maps a layer with a specific set of weights to the oneDNN primitive storing those weights.
We're currently working on the proper fix for this, which involves making lowp_gemm stateless and fixed-format in ACL and oneDNN.
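As a side note, here is a minimal, hypothetical sketch (made-up names, not oneDNN code) of the framework-side caching concern described above: the cache key must include the identity of the constant weights, not just shapes and data types, so that a primitive holding cached pretransposed weights is never reused for a different weight tensor.

#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>

// Hypothetical framework-side primitive cache: the key carries the identity of
// the constant weights buffer in addition to the problem shape.
struct matmul_key {
    int64_t m, n, k; // problem shape
    const void *weights; // identity of the constant weights buffer
    bool operator==(const matmul_key &o) const {
        return m == o.m && n == o.n && k == o.k && weights == o.weights;
    }
};

struct matmul_key_hash {
    size_t operator()(const matmul_key &key) const {
        size_t h = std::hash<int64_t>()(key.m);
        h = h * 31 + std::hash<int64_t>()(key.n);
        h = h * 31 + std::hash<int64_t>()(key.k);
        h = h * 31 + std::hash<const void *>()(key.weights);
        return h;
    }
};

struct cached_primitive {}; // stand-in for a compiled lowp matmul primitive

// One entry per (shape, weights) pair, so cached pretransposed weights are
// never reused for a different weight tensor.
std::unordered_map<matmul_key, std::shared_ptr<cached_primitive>, matmul_key_hash>
        primitive_cache;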

Fixes # (github issue)

Checklist

General

  • Do all unit and benchdnn tests (make test and make test_benchdnn_*) pass locally for each commit?
  • Have you formatted the code using clang-format?

Performance improvements

  • Have you submitted performance data that demonstrates performance improvements?

New features

  • Have you published an RFC for the new feature?
  • Was the RFC approved?
  • Have you added relevant tests?

Bug fixes

  • Have you included information on how to reproduce the issue (either in a github issue or in this PR)?
  • Have you added relevant regression tests?

RFC PR

  • Does RFC document follow the template?
  • Have you added a link to the rendered document?

renato-arantes and others added 3 commits November 4, 2024 09:53
// Constructing the weight TensorInfo (the assignment line is reconstructed
// from surrounding context); the new set_are_values_constant(true) call lets
// ACL pretranspose the weights once and cache the result.
almc_.wei_tensor_info = arm_compute::TensorInfo(
        arm_compute::TensorShape(N(), K()), 1,
        acl_utils::get_acl_data_t(wei_d.data_type(), true),
        arm_compute::QuantizationInfo(1.0, 0, true));
almc_.wei_tensor_info.set_are_values_constant(true);
Contributor commented:

Hi @fadara01,
What does constant mean here? Is it about constant values within the tensor, or just that the shape is constant?
If it is about constant values in the weights, it does not seem that primitive creation takes the weights. Does that mean the primitive state changes upon first execution?

    }

    status_t execute(const exec_ctx_t &ctx) const {
        std::lock_guard<std::mutex> _lock {this->mtx};
Contributor commented:

Is the mutex protection only about the quantization parameters that are reset, or is there something else due to constant weights?
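For context, a self-contained sketch with hypothetical names (not the PR's actual code) of the pattern being asked about: execute() mutates state held by the object - the weights pretransposed and cached on the first call, plus a per-call quantization parameter - so a mutex serializes concurrent executions of the same primitive.

#include <cstdint>
#include <mutex>
#include <optional>
#include <vector>

// Hypothetical stand-in for a primitive that holds mutable ACL-like state.
class lowp_gemm_like {
public:
    void execute(const std::vector<int8_t> &weights, float scale) const {
        std::lock_guard<std::mutex> lock(mtx_);
        // First call only: pretranspose and cache the constant weights.
        if (!pretransposed_) pretransposed_ = pretranspose(weights);
        // Every call: update the run-time quantization parameter.
        current_scale_ = scale;
        // A real implementation would now run the kernel using
        // *pretransposed_ and current_scale_.
    }

private:
    static std::vector<int8_t> pretranspose(const std::vector<int8_t> &w) {
        return {w.rbegin(), w.rend()}; // stand-in for ACL's weight reshaping
    }
    mutable std::mutex mtx_;
    mutable std::optional<std::vector<int8_t>> pretransposed_;
    mutable float current_scale_ = 1.f;
};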

@mgouicem (Contributor) left a comment:

One general note here: the oneDNN primitive API has no concept of a constant memory object, so such an optimization is unsafe (there is no guarantee from the primitive implementation perspective that weights will not change between execute calls).

oneDNN Graph API has that concept though, and it might make sense to add it to the primitive API if it helps. But I would hold off committing this patch unless we extend the API with constant memory objects.
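
For reference, a rough sketch of the Graph API concept mentioned here, assuming the dnnl::graph C++ API from oneDNN 3.x (the exact header path, constructor overload, and enumerators may differ between versions): a weights logical tensor created with property_type::constant promises that the values will not change between executions, which is what allows packed/pretransposed weights to be cached safely.

#include <cstdint>
#include <vector>
#include "oneapi/dnnl/dnnl_graph.hpp"

using lt = dnnl::graph::logical_tensor;

// Sketch: a weights logical tensor marked as constant (values promised not to
// change across executions), so the backend may cache transformed weights.
lt make_constant_weights_lt() {
    std::vector<int64_t> shape {256, 64}; // example K x N, hypothetical sizes
    return lt {1, lt::data_type::s8, shape, lt::layout_type::strided,
            lt::property_type::constant};
}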
