
[Lookup] Relax input datatype constraints #1267

Merged: 5 commits into Xilinx:dev on Feb 20, 2025

Conversation

@iksnagreb (Contributor) commented on Jan 28, 2025

FINN and QONNX generally assume float32 as the container datatype of tensors. However, Gather/Lookup is exported with int64 tensors as the index input, which makes more sense but is not properly handled by our compiler infrastructure. Instead of refactoring container datatype handling throughout the code base, for now it seems sufficient to disable some of these assertions, or at least relax them to warnings. A proper refactoring of the Lookup operator might be necessary in the near future anyway.
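
For illustration, a minimal numpy sketch of the mismatch described above (tensor names and shapes are made up, this is not FINN code): the embedding table lives in the usual float32 container, while Gather/Lookup exports the index input as int64.

    import numpy as np

    # Illustrative only: embedding table stored in the usual float32 container
    embeddings = np.random.rand(8, 4).astype(np.float32)
    # Index input as exported for Gather/Lookup: an int64 tensor
    indices = np.asarray([1, 3, 5], dtype=np.int64)

    # Gather/Lookup semantics via numpy fancy indexing; the output stays float32,
    # but the int64 index tensor is what trips the container-dtype assertions
    out = embeddings[indices]
    print(out.dtype)  # float32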

@auphelia self-requested a review on February 7, 2025, 16:38
Review comment on the StreamingFIFO diff context:

    not float32 as expected."""

    # Make sure the input has the right container datatype
    if inp.dtype is not np.float32:
Collaborator:

How come you needed to add this in StreamingFIFO as well?

Contributor Author (@iksnagreb):

Hm, not exactly sure; I have seen this assertion fail in combination with Lookup. And I think, at least for simulation, it is indeed too restrictive in general: almost always we assume tensors to be stored in float32 containers, and usually QONNX attaches a datatype annotation (representing the actual, quantized datatype) to this. However, some transformations, operators and also some numpy functions used by the infrastructure do not always keep these two concepts as cleanly separated as they should (some numpy functions, for example, implicitly default to float64, as we already saw as part of the RoundAndClipThresholds issue; in this case, Lookup/Gather wants some int64 type, I guess). Whether the whole float32 container type concept really makes sense, or should be reworked to respect more numpy types, is another discussion. But for now I think that anytime we are asserting something about the dtype, i.e., the container type, we should relax this to a warning alongside .astype(np.float32) to restore the expected behavior, and just check whether the simulation still yields the expected results.
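
A minimal sketch of the warn-and-cast pattern suggested above (the helper name is hypothetical and not part of the FINN code base):

    import warnings

    import numpy as np

    def ensure_float32_container(inp, name="input"):
        # Hypothetical helper: relax the hard container-dtype assertion to a
        # warning and cast back to the expected float32 container for simulation
        if inp.dtype != np.float32:
            warnings.warn(
                f"{name} has container dtype {inp.dtype}, not float32 as expected;"
                " casting to float32 for simulation."
            )
            inp = inp.astype(np.float32)
        return inp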

Collaborator:

Warnings have a tendency to be overlooked, and this code changes the model like a transformation would. If I have your OK, I would rather suggest a temporary interpretation of the dtype as float32 for the execution, afterwards reverting it back to what the tensor dtype was. For the Lookup layer, I am OK with these changes for now, because it is more contained and I believe that layer needs a bigger refactoring soon.
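
A rough sketch of this alternative, assuming execution reads and writes tensors via a context dictionary (the function names are illustrative, not FINN API): cast to float32 only for the duration of the execution and restore the original container dtype afterwards.

    import numpy as np

    def execute_with_float32_view(context, tensor_name, execute_fn):
        # Illustrative only: temporarily reinterpret the tensor as float32 for
        # execution, then revert the context entry to its original container dtype
        original = context[tensor_name]
        context[tensor_name] = original.astype(np.float32)
        try:
            execute_fn(context)
        finally:
            context[tensor_name] = original
        return context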

Contributor Author (@iksnagreb):

This should not change the model; it is only part of the execute_node rtlsim infrastructure. And if you look at the output side, it already does something similar, just without any warning/error:

output = np.asarray([output], dtype=np.float32).reshape(*oshape)

And likewise the cppsim python/numpy fallback:

output = np.asarray([output], dtype=np.float32).reshape(*exp_shape)

Collaborator:

Oh yes, I see! Thanks, I mistook inp for the tensor itself and not the input values.

@auphelia (Collaborator) left a comment:

Thank you for the clarifications, looks good to me!

@auphelia merged commit 76eede6 into Xilinx:dev on Feb 20, 2025
2 checks passed