
n:m:g sparse format #3

Open
chhzh123 opened this issue Apr 25, 2023 · 2 comments

@chhzh123

Hi, thanks for open-sourcing the great work, which is very helpful for sparse deep learning workloads. I notice there is an n:m:g sparsity layout in your paper, but I could not find the GroupedNMSparsifier class in this repository. Could you kindly point me to that implementation?

You also mentioned "CPU implementations for n:m:g sparsity were compiled with GCC 8.4", but it seems this repository only contains the Python code. Will you release the kernel implementation later?

@and-ivanov
Contributor

We have just released our implementation, and you can view an example of how to use it here:

sten/tests/test_nmg.py

Lines 6 to 30 in f2a5aa0

def test_bert_inference():
    model = torch.hub.load(
        "huggingface/pytorch-transformers", "model", "bert-base-uncased"
    )
    input = torch.randint(low=0, high=100, size=(8, 512))
    weights_to_sparsify = [
        module_name + ".weight"
        for module_name, module in model.named_modules()
        if (
            isinstance(module, torch.nn.modules.linear.Linear)
            and "encoder.layer" in module_name
        )
    ]
    assert weights_to_sparsify
    sb = sten.SparsityBuilder()
    for weight in weights_to_sparsify:
        sb.set_weight(
            name=weight,
            initial_sparsifier=sten.GroupedNMSparsifier(n=3, m=6, g=4),
            out_format=sten.GroupedNMTensor,
        )
    sparse_model = sb.get_sparse_model(model)
    output = sparse_model(input)
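For readers unfamiliar with the n:m part of the layout, here is a minimal, illustrative PyTorch sketch of magnitude-based n:m pruning (keep the n largest-magnitude values in every contiguous group of m elements). This is not the sten API; the helper name and the exact semantics of the g parameter in GroupedNMSparsifier are assumptions left aside here:

```python
import torch

def nm_mask(x: torch.Tensor, n: int = 3, m: int = 6) -> torch.Tensor:
    # Toy n:m mask (NOT the sten implementation): within each group of m
    # consecutive elements along the flattened tensor, keep the n entries
    # with the largest absolute value.
    groups = x.reshape(-1, m)
    keep_idx = groups.abs().topk(n, dim=1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(1, keep_idx, True)
    return mask.reshape(x.shape)

w = torch.randn(4, 12)
mask = nm_mask(w, n=3, m=6)
sparse_w = w * mask  # every group of 6 consecutive entries has 3 nonzeros
```

The real GroupedNMTensor additionally stores the values in a grouped layout (the g parameter) so the CPU kernels can process several n:m blocks together.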

@rexxy-sasori

It seems that your code is CPU-based. I timed line 30 (output = sparse_model(input)) and benchmarked it against output = model(input), the dense version, and the sparse version appears to be much slower. In addition, how can I run it on GPU? Is calling .cuda() on the model and input enough?
