src: cpu: aarch64: lowp_matmul_sq: Make weights constant #2212
base: main
Conversation
arm_compute::TensorShape(N(), K()), 1,
acl_utils::get_acl_data_t(wei_d.data_type(), true),
arm_compute::QuantizationInfo(1.0, 0, true));
almc_.wei_tensor_info.set_are_values_constant(true);
Hi @fadara01,
What does constant mean here? Is it about constant values within the tensor, or just that the shape is constant?
If it is about constant values in the weights, primitive creation does not seem to take the weights; does that mean the primitive state changes upon first execution?
}

status_t execute(const exec_ctx_t &ctx) const {
    std::lock_guard<std::mutex> _lock {this->mtx};
Is the mutex protection only about the quantization parameters that are reset, or is there something else due to constant weights?
One general note here: the oneDNN primitive API has no concept of a constant memory object, so such an optimization is unsafe (there is no guarantee, from the primitive implementation's perspective, that the weights will not change between execute calls).
The oneDNN Graph API does have that concept, though, and it might make sense to add it to the primitive API if it helps. But I would hold off on committing this patch unless we extend the API with constant memory objects.
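For reference, a minimal sketch of how the Graph API expresses constant weights through a logical tensor's property_type; the tensor IDs, shapes, and data types below are illustrative only and are not taken from this patch:

#include "oneapi/dnnl/dnnl_graph.hpp"

using namespace dnnl::graph;

// Build a tiny f32 MatMul graph whose weight tensor is declared constant.
void build_matmul_with_constant_weights() {
    logical_tensor src(0, logical_tensor::data_type::f32, {128, 256},
            logical_tensor::layout_type::strided);
    // property_type::constant is the Graph API's promise that the weight
    // values never change between executions, so packed or pretransposed
    // weights can be cached safely by the backend.
    logical_tensor wei(1, logical_tensor::data_type::f32, {256, 512},
            logical_tensor::layout_type::strided,
            logical_tensor::property_type::constant);
    logical_tensor dst(2, logical_tensor::data_type::f32, {128, 512},
            logical_tensor::layout_type::strided);

    op matmul(3, op::kind::MatMul, {src, wei}, {dst}, "matmul");

    graph g(dnnl::engine::kind::cpu);
    g.add_op(matmul);
    g.finalize();
}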
Description
Setting the weights as constant allows us to avoid redundant pretranspose operations in Arm Compute Library (ACL) every time execute is called (they are now run once and cached). This delivers big speedups, especially for relatively small matmuls.
Note that this is a temporary fix that needs to be handled carefully by primitive caches in frameworks, since the ACL object now holds more state; i.e., we want to make sure that the cache maps a layer with a specific set of weights to the oneDNN primitive storing those weights.
We're currently working on the proper fix for this which involves making lowp_gemm stateless and fixed-format in ACL and oneDNN.
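To make the mechanism concrete, here is a minimal sketch of the ACL side of the change; the helper function and the QASYMM8_SIGNED data type are assumptions for illustration, while the QuantizationInfo arguments and the set_are_values_constant(true) call mirror the diff above:

#include "arm_compute/core/QuantizationInfo.h"
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"

// Hypothetical helper showing how the weight TensorInfo is built for the
// lowp matmul and marked as holding constant values.
arm_compute::TensorInfo make_constant_wei_info(size_t N, size_t K) {
    arm_compute::TensorInfo wei_info(
            arm_compute::TensorShape(N, K), 1,
            arm_compute::DataType::QASYMM8_SIGNED, // assumed s8 weights
            arm_compute::QuantizationInfo(1.0, 0, true));
    // Key line of the patch: the weight values are declared constant, so ACL
    // performs its pretranspose once on the first run and caches the result
    // for subsequent execute calls instead of redoing it every time.
    wei_info.set_are_values_constant(true);
    return wei_info;
}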
Fixes # (github issue)
Checklist
General
Do all unit and benchdnn tests (make test and make test_benchdnn_*) pass locally for each commit?
Performance improvements
New features
Bug fixes
RFC PR