From 36f8bd5ded5b3469f7892099590bb2405cc8f744 Mon Sep 17 00:00:00 2001
From: Driss Guessous <32754868+drisspg@users.noreply.github.com>
Date: Wed, 23 Oct 2024 10:15:03 -0700
Subject: [PATCH] Fix typo in readme (#64)

stack-info: PR: https://github.com/pytorch-labs/attention-gym/pull/64, branch: drisspg/stack/1
---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index cad6dea..3e0ecbf 100644
--- a/README.md
+++ b/README.md
@@ -34,7 +34,7 @@ cd attention-gym
 pip install .
 ```
 
-## 💻 Usage
+## 💻 Usage
 
 There are two main ways to use Attention Gym:
 
@@ -48,10 +48,10 @@ There are two main ways to use Attention Gym:
 ```python
 from torch.nn.attention.flex_attention import flex_attention, create_block_mask
 from attn_gym.masks import generate_sliding_window
-
+
 # Use the imported function in your code
-sliding_window_mask = generate_sliding_window(window_size=1024)
-block_mask = create_block_mask(mask_mod, 1, 1, S, S, device=device)
+sliding_window_mask_mod = generate_sliding_window(window_size=1024)
+block_mask = create_block_mask(sliding_window_mask_mod, 1, 1, S, S, device=device)
 out = flex_attention(query, key, value, block_mask=block_mask)
 ```
 
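
Note: the corrected README snippet assumes that query, key, value, S, and device are already defined elsewhere. Below is a minimal, self-contained sketch of how the fixed call sequence runs end to end; the batch/head/sequence/head-dim sizes, the dummy random tensors, and the device selection are illustrative assumptions, not part of this patch.

import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask
from attn_gym.masks import generate_sliding_window

# Illustrative sizes and device; not taken from the patch itself.
device = "cuda" if torch.cuda.is_available() else "cpu"
B, H, S, D = 1, 8, 2048, 64  # batch, heads, sequence length, head dim

query = torch.randn(B, H, S, D, device=device)
key = torch.randn(B, H, S, D, device=device)
value = torch.randn(B, H, S, D, device=device)

# generate_sliding_window returns a mask_mod callable; passing that callable
# (rather than an undefined name) to create_block_mask is what this patch fixes.
sliding_window_mask_mod = generate_sliding_window(window_size=1024)
block_mask = create_block_mask(sliding_window_mask_mod, 1, 1, S, S, device=device)
out = flex_attention(query, key, value, block_mask=block_mask)
print(out.shape)  # torch.Size([1, 8, 2048, 64])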