Add tracing option to quantize bench #3651
Closed · +221 −206
Conversation
This pull request was exported from Phabricator. Differential Revision: D68980020
Summary: X-link: facebookresearch/FBGEMM#695
This diff is the NVIDIA mirror of D68686266, which changes dynamic grouped gemm to return a tensor of shape [total_M, N] when zero_start_index_M isn't provided. We also add appropriate tests to make sure the behavior doesn't break going forward.
Reviewed By: jasonjk-park, jianyuh, jiawenliu64
Differential Revision: D68689077
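A minimal sketch of the output-shape contract described above, using a plain PyTorch loop as a stand-in for the real grouped-gemm kernel (the function name here is illustrative, not FBGEMM's actual op):

```python
import torch

def dynamic_grouped_mm(xs, ws):
    # Reference semantics only: multiply each group and concatenate the
    # results along M, producing a single [total_M, N] output tensor.
    return torch.cat([x @ w.t() for x, w in zip(xs, ws)], dim=0)

xs = [torch.randn(m, 64) for m in (3, 5, 2)]  # per-group activations, K=64
ws = [torch.randn(32, 64) for _ in xs]        # per-group weights, N=32
out = dynamic_grouped_mm(xs, ws)
assert out.shape == (3 + 5 + 2, 32)           # [total_M, N]
```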
…3639) Summary: X-link: facebookresearch/FBGEMM#714
D68797978 implemented a new feature that allowed partial rowwise quantization for jagged tensors in the hopes of improving MoE performance. However, it operated on the wrong dimension (oops). This update shifts the quantization to the proper per-group nonzero rows.
Reviewed By: jasonjk-park, jiawenliu64
Differential Revision: D68872138
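A hedged sketch of the intended semantics: per-row scales are computed only over each group's valid (nonzero) rows, and padding rows are left untouched. int8 stands in for the real fp8 format, and the function name is hypothetical:

```python
import torch

def rowwise_quantize_jagged(x, valid_rows):
    # x: [G, max_M, K] padded jagged tensor; valid_rows[g] is the number
    # of real rows in group g. Only those rows receive per-row scales.
    xq = torch.zeros_like(x, dtype=torch.int8)
    scales = torch.ones(x.shape[0], x.shape[1])
    for g in range(x.shape[0]):
        m = int(valid_rows[g])
        row_max = x[g, :m].abs().amax(dim=1).clamp(min=1e-12)
        scales[g, :m] = row_max / 127.0
        xq[g, :m] = torch.round(x[g, :m] / scales[g, :m, None]).to(torch.int8)
    return xq, scales
```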
Summary: X-link: facebookresearch/FBGEMM#724
When benchmarking quantize functions, we'd like the overhead to mimic e2e behavior as closely as possible. For example, weights should be quantized ahead of time. The current design of quantize_bench does not allow this. To accommodate it, I've added a new optional preprocess phase that allows some transformations to be applied independently from benchmarking. Here we use it to prepare data for grouped gemm benchmarks to more accurately capture the e2e behavior.
Reviewed By: jiawenliu64
Differential Revision: D68964950
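A minimal sketch of such a preprocess phase, assuming a simple timing harness (the names benchmark and preprocess are illustrative, not quantize_bench's actual interface): one-time work such as weight quantization runs before the timed loop, so it never pollutes the measurement.

```python
import time

def benchmark(op, args, preprocess=None, iters=100):
    # Optional preprocess phase: one-time transformations (e.g. quantizing
    # weights ahead of time) applied outside the timed region.
    if preprocess is not None:
        args = preprocess(*args)
    start = time.perf_counter()
    for _ in range(iters):
        op(*args)
    return (time.perf_counter() - start) / iters
```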
Summary: X-link: facebookresearch/FBGEMM#727
Adds support for the --trace option, which produces GPU traces for each benchmarked operator. This only works internally, so if tried in OSS we fall back to nullcontext.
Reviewed By: jiawenliu64
Differential Revision: D68980020
Force-pushed from c6f82a7 to ff0150b.
This pull request has been merged in 79fcd5b.
Summary:
X-link: https://github.com/facebookresearch/FBGEMM/pull/727
Adds support for the --trace option, which produces GPU traces for each benchmarked operator. Tracing only works internally, so in OSS builds we fall back to nullcontext.
Reviewed By: jiawenliu64
Differential Revision: D68980020
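A minimal sketch of the fallback pattern described above, assuming a hypothetical internal profiler module (the import path is an assumption; only contextlib.nullcontext is real):

```python
import contextlib

def trace_context(enabled: bool):
    # When tracing is requested, try the internal GPU profiler; in OSS the
    # import fails and we silently fall back to a no-op context.
    if not enabled:
        return contextlib.nullcontext()
    try:
        from internal_tools import gpu_profiler  # hypothetical internal-only module
        return gpu_profiler.trace()
    except ImportError:
        return contextlib.nullcontext()
```

Each benchmarked operator then runs inside `with trace_context(args.trace): ...`, so OSS runs are unaffected whether or not the flag is passed.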