
Adding ttir.repeat op in MLIR #1941

Open · wants to merge 1 commit into base: main
Conversation

sdjordjevicTT
Contributor

This PR implements the TTIR repeat op and lowers it to the TTNN repeat op. Compiler and silicon tests for the repeat op are added.

I used this opportunity to group tests for data movement ops under the same folder.

Closes #1916

@sdjordjevicTT sdjordjevicTT force-pushed the sdjordjevic/adding_repeat_op branch from 3cfbbcc to 4026340 Compare January 23, 2025 12:34
@svuckovicTT
Contributor

Please add an EmitC test as well :)

@nvukobratTT
Contributor

FYI @ashokkumarkannan1, once this change lands on MLIR main, we should map it to the FFE and test it out :))

// RepeatOp
//===----------------------------------------------------------------------===//

// BroadcastOp verification
Contributor


nit: RepeatOp

@sdjordjevicTT
Contributor Author

Please add an EmitC test as well :)

Yep, I will add it! :)

%input = ... : tensor<2x3xf32>

// Repeat each dimension twice
%repeated = "repeat"(%input) {repeat_dimensions = dense<[2, 2]> : tensor<2xi64>} : tensor<2x3xf32> -> tensor<4x6xf32>
Contributor


Nit: ttir.repeat
Dense is written as: array<i32: 2, 2>
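For intuition, the repeat semantics in the snippet above behave like NumPy's `tile`: each dimension of the input is replicated by its repeat factor. The following is an illustrative sketch only, not the compiler's implementation:

```python
import numpy as np

# A 2x3 input repeated twice along each dimension yields a 4x6 result,
# matching the tensor<2x3xf32> -> tensor<4x6xf32> example above.
x = np.arange(6, dtype=np.float32).reshape(2, 3)
y = np.tile(x, (2, 2))
print(y.shape)  # (4, 6)
```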

@@ -803,6 +803,40 @@ def TTIR_ConcatOp : TTIR_DPSOp<"concat"> {
let hasVerifier = 1;
}

def TTIR_RepeatOp : TTIR_DPSOp<"repeat"> {
Contributor


As we have discussed offline, I think both solutions are fine (broadcast vs repeat), I just believe we should have some note that they are currently semantically the same, either in the comment or in the op description.

::mlir::RankedTensorType outputType = getOutput().getType();
llvm::ArrayRef<int32_t> repeatDimensions = getRepeatDimensions();

// Input tensor and repeat dimensions argument must have the same rank
Contributor


Nit: dot at the end of the comment.
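For context, the verifier's shape checks can be sketched in Python. This is an illustrative mock of the C++ verifier, and `verify_repeat` is a hypothetical name, not a function in the codebase:

```python
def verify_repeat(input_shape, repeat_dimensions, output_shape):
    """Mimic the TTIR repeat verifier's structural checks (illustrative only)."""
    # Input tensor and repeat dimensions argument must have the same rank.
    if len(input_shape) != len(repeat_dimensions):
        return False
    # Each output dimension is the input dimension times its repeat factor.
    return all(o == i * r for i, r, o in
               zip(input_shape, repeat_dimensions, output_shape))

print(verify_repeat((2, 3), (2, 2), (4, 6)))  # True
print(verify_repeat((2, 3), (2,), (4, 6)))    # False: rank mismatch
```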


// -----

// Verify that the parsing fails if the input tensor and repeat_dimensions attribute don't have the same rank
Contributor


Comment should reflect behaviour.

Successfully merging this pull request may close these issues.

[Ops] Support for ttir.repeat op (ttnn.repeat)
5 participants