[Loop Specialization]: Specialize loops containing masked operations with loop invariant mask #3586

Open · wants to merge 8 commits into base: main
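
The title describes the transformation: when a masked operation sits inside a loop but its mask does not depend on the loop's induction variable, the loop can be versioned so that the common case runs without per-element masking. The plain-Python sketch below only illustrates that idea under invented names; it is not the pass implementation.

# Purely illustrative sketch (not from this PR) of specializing a loop on a
# loop-invariant mask: test the invariant mask once, outside the loop, and
# dispatch to an unmasked fast path when it is all-true.

def masked_sum(data, mask):
    # Generic form: every iteration applies the (loop-invariant) mask.
    total = 0.0
    for row in data:
        total += sum(v for v, keep in zip(row, mask) if keep)
    return total

def specialized_masked_sum(data, mask):
    # Specialized form: branch once on the hoisted mask condition.
    if all(mask):
        return sum(sum(row) for row in data)   # fast path: no masking in the loop
    return masked_sum(data, mask)              # slow path: keep the masked loop

The first diff below is a FileCheck test for the -triton-intel-remove-masks pass (the file path is not visible in this excerpt); the second adds two passes to third_party/intel/backend/compiler.py.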
@@ -1,6 +1,7 @@
// RUN: triton-opt %s -triton-intel-remove-masks -triton-raise-block-pointer -canonicalize | FileCheck %s

module {
// COM: Derived from tutorial 03-matrix-multiplication.
tt.func public @matmul_kernel(%arg0: !tt.ptr<f16> {tt.divisibility = 16 : i32}, %arg1: !tt.ptr<f16> {tt.divisibility = 16 : i32}, %arg2: !tt.ptr<f16> {tt.divisibility = 16 : i32}, %arg3: i32 {tt.divisibility = 16 : i32}, %arg4: i32 {tt.divisibility = 16 : i32}, %arg5: i32 {tt.divisibility = 16 : i32}, %arg6: i32 {tt.divisibility = 16 : i32}, %arg7: i32 {tt.divisibility = 16 : i32}, %arg8: i32 {tt.divisibility = 16 : i32}) {
%c31_i32 = arith.constant 31 : i32
%cst = arith.constant dense<0.000000e+00> : tensor<64x128xf32>
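
The test above is derived from the tutorial 03 matrix-multiplication kernel. For orientation, a kernel of roughly the following shape (a hypothetical Triton sketch, not the tutorial kernel and not part of this PR) is the kind of source that yields a loop-invariant mask at the TTIR level: row_mask depends only on the program id and M, never on the loop counter k, so every masked load in the K loop uses the same mask value.

import triton
import triton.language as tl

# Hypothetical kernel for illustration only; assumes K is a multiple of BLOCK_K
# so no column mask is needed.
@triton.jit
def row_sum_kernel(x_ptr, out_ptr, M, K, stride_m, stride_k,
                   BLOCK_M: tl.constexpr, BLOCK_K: tl.constexpr):
    pid = tl.program_id(axis=0)
    rows = pid * BLOCK_M + tl.arange(0, BLOCK_M)
    row_mask = rows < M                        # loop-invariant: no dependence on k
    acc = tl.zeros((BLOCK_M,), dtype=tl.float32)
    for k in range(0, K, BLOCK_K):
        cols = k + tl.arange(0, BLOCK_K)
        ptrs = x_ptr + rows[:, None] * stride_m + cols[None, :] * stride_k
        # Masked load inside the loop, but the mask itself never changes.
        tile = tl.load(ptrs, mask=row_mask[:, None], other=0.0)
        acc += tl.sum(tile, axis=1)
    tl.store(out_ptr + rows, acc, mask=row_mask)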
third_party/intel/backend/compiler.py (2 changes: 2 additions & 0 deletions)
@@ -224,6 +224,8 @@ def make_ttir(mod, metadata, opt):
pm.enable_debug()
passes.common.add_inliner(pm)
passes.ttir.add_combine(pm)
passes.common.add_cse(pm)
passes.common.add_licm(pm)
intel.passes.ttir.add_remove_masks(pm)
if raise_block_ptr_flags['enabled']:
ignore_masks = True if raise_block_ptr_flags['ignore-masks'] else False
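
Given the hunk header (6 lines grow to 8, with 2 additions and 0 deletions reported for this file), the two added lines appear to be passes.common.add_cse(pm) and passes.common.add_licm(pm). Running CSE and LICM immediately before intel.passes.ttir.add_remove_masks is presumably what exposes loop-invariant masks to the specialization logic: LICM hoists a mask computation that does not depend on the induction variable out of the loop body, and CSE collapses duplicate copies of it, so the mask-removal pass sees a single mask value defined outside the loop. The plain-Python toy below (invented names, not Triton IR) shows the effect of that hoisting.

# Toy illustration of loop-invariant code motion on a mask: `r < M` does not
# depend on `k`, so it can be computed once, outside the loop.

def mask_inside_loop(rows, M, K):
    hits = 0
    for k in range(K):
        mask = [r < M for r in rows]   # recomputed every iteration
        hits += sum(mask)
    return hits

def mask_hoisted(rows, M, K):
    mask = [r < M for r in rows]       # hoisted: computed once before the loop
    hits = 0
    for k in range(K):
        hits += sum(mask)
    return hits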