[Lookup] Relax input datatype constraints #1267

Merged: 5 commits, Feb 20, 2025
39 changes: 13 additions & 26 deletions src/finn/custom_op/fpgadataflow/hls/lookup_hls.py
@@ -28,6 +28,7 @@

import numpy as np
import os
import warnings
from math import ceil, log2
from qonnx.core.datatype import DataType

@@ -87,31 +88,6 @@ def defines(self, var):
my_defines.append("#define EmbeddingType %s" % emb_hls_type)
self.code_gen_dict["$DEFINES$"] = my_defines

def read_npy_data(self):
code_gen_dir = self.get_nodeattr("code_gen_dir_cppsim")
dtype = self.get_input_datatype()
if dtype == DataType["BIPOLAR"]:
# use binary for bipolar storage
dtype = DataType["BINARY"]
elem_bits = dtype.bitwidth()
packed_bits = self.get_instream_width()
packed_hls_type = "ap_uint<%d>" % packed_bits
elem_hls_type = dtype.get_hls_datatype_str()
npy_type = "int64_t"
npy_in = "%s/input_0.npy" % code_gen_dir
self.code_gen_dict["$READNPYDATA$"] = []
self.code_gen_dict["$READNPYDATA$"].append(
'npy2apintstream<%s, %s, %d, %s>("%s", in0_%s);'
% (
packed_hls_type,
elem_hls_type,
elem_bits,
npy_type,
npy_in,
self.hls_sname(),
)
)

def dataoutstrm(self):
code_gen_dir = self.get_nodeattr("code_gen_dir_cppsim")
dtype = self.get_output_datatype()
@@ -273,7 +249,18 @@ def execute_node(self, context, graph):
)

inp = context[node.input[0]]
assert inp.dtype == np.int64, "Inputs must be contained in int64 ndarray"

# Make sure the input has the right container datatype
if inp.dtype is not np.float32:
# Issue a warning to make the user aware of this type-cast
warnings.warn(
f"{node.name}: Changing input container datatype from "
f"{inp.dtype} to {np.float32}"
)
# Convert the input to floating point representation as the
# container datatype
inp = inp.astype(np.float32)

assert inp.shape == exp_ishape, """Input shape doesn't match expected shape."""
export_idt = self.get_input_datatype()
odt = self.get_output_datatype()
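As an aside, the change to execute_node above amounts to replacing a hard container-dtype assertion with a warn-and-cast step. A minimal, self-contained sketch of that pattern (an illustration, not code from this PR; the helper name ensure_float32_container is hypothetical):

import warnings
import numpy as np

def ensure_float32_container(name, arr):
    # Hypothetical helper sketching the warn-and-cast pattern used above: cast an
    # execution tensor to the float32 container dtype, warning on any mismatch.
    if arr.dtype != np.float32:
        warnings.warn(
            f"{name}: Changing input container datatype from {arr.dtype} to {np.float32}"
        )
        arr = arr.astype(np.float32)
    return arr

# ONNX Gather/Lookup indices typically arrive as int64, which the removed assertion
# required; the relaxed execute_node now warns and converts any non-float32 input
# to the float32 container instead.
indices = ensure_float32_container("Lookup_0", np.array([0, 3, 1], dtype=np.int64))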
16 changes: 12 additions & 4 deletions src/finn/custom_op/fpgadataflow/rtl/streamingfifo_rtl.py
@@ -133,10 +133,18 @@ def execute_node(self, context, graph):
elif mode == "rtlsim":
code_gen_dir = self.get_nodeattr("code_gen_dir_ipgen")
# create a npy file for the input of the node
assert (
str(inp.dtype) == "float32"
), """Input datatype is
not float32 as expected."""

# Make sure the input has the right container datatype
if inp.dtype is not np.float32:
(Review comment thread on this line)

Collaborator:
How come you needed to add this in StreamingFIFO as well?

Contributor Author:
Hm, not exactly sure, I have seen this assertion fail in combination with Lookup. And I think, at least for simulation, it is indeed too restrictive in general: we almost always assume tensors to be stored in float32 containers, and QONNX usually attaches a datatype annotation (representing the actual, quantized datatype) to this. However, some transformations, operators and also some numpy functions used by the infrastructure do not always keep these two concepts as cleanly separated as they should (some numpy functions, for example, implicitly default to float64, which we have already seen as part of the RoundAndClipThresholds issue...; in this case, Lookup/Gather wants some int64 type, I guess). Whether the whole float32 container type concept really makes sense or should be reworked to respect more numpy types is another discussion, but for now I think that anytime we are asserting something about the dtype, i.e., the container type, we should relax this to a warning alongside .astype(np.float32) to restore the expected behavior, and just check whether the simulation still yields the expected results.
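To make the container/annotation distinction above concrete, a small numpy-only illustration (mine, not from the PR): the execution tensor lives in a float32 container, the annotated QONNX datatype describes the integer values it holds, and the integer cast happens only at the point where Gather-style indexing actually needs it.

import numpy as np

# Container dtype: a float32 ndarray, as FINN's execution infrastructure assumes.
# Annotated (QONNX) datatype: conceptually e.g. UINT8 -- the values are integers,
# but they are stored in a float32 container.
indices_container = np.array([2.0, 0.0, 1.0], dtype=np.float32)

embeddings = np.arange(12, dtype=np.float32).reshape(3, 4)  # 3 entries, 4 dims each

# Gather/Lookup semantics: cast to an integer type only where integer indices are
# actually required, instead of asserting on the container dtype up front.
result = np.take(embeddings, indices_container.astype(np.int64), axis=0)
print(result.shape)  # (3, 4)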

Collaborator:
Warnings have a tendency to be overlooked, and this code changes the model like a transformation would. If I have your ok, I would rather suggest temporarily interpreting the dtype as float32 for the execution and afterwards reverting it back to what the tensor dtype was. For the Lookup layer, I am ok with these changes for now because it is more contained and I believe that layer needs a bigger refactoring soon.
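For reference, a rough sketch of the alternative suggested here (an assumption of what is meant, not code from this PR): interpret the data as float32 only for the duration of the execution and restore the original container dtype afterwards.

import numpy as np

def execute_with_float32_view(inp, run):
    # Hypothetical helper: run the execution on a float32 copy of the input and cast
    # the result back to the original container dtype, so the tensors seen by the
    # caller keep their dtype.
    original_dtype = inp.dtype
    out = run(inp.astype(np.float32))
    return out.astype(original_dtype)

# Usage: indices arrive as int64, the execution sees float32, the caller gets int64 back.
out = execute_with_float32_view(np.array([1, 0, 2], dtype=np.int64), lambda x: x * 2.0)
print(out.dtype)  # int64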

Contributor Author:
This should not change the model; this is only part of the execute_node rtlsim infrastructure. And if you look at the output side, it already does something similar, just without any warning/error:

output = np.asarray([output], dtype=np.float32).reshape(*oshape)

And likewise the cppsim python/numpy fallback:

output = np.asarray([output], dtype=np.float32).reshape(*exp_shape)

Collaborator:
Oh yes, I see! Thanks, I mistook inp for the tensor itself and not the input values.

# Issue a warning to make the user aware of this type-cast
warnings.warn(
f"{node.name}: Changing input container datatype from "
f"{inp.dtype} to {np.float32}"
)
# Convert the input to floating point representation as the
# container datatype
inp = inp.astype(np.float32)

expected_inp_shape = self.get_folded_input_shape()
reshaped_input = inp.reshape(expected_inp_shape)
if DataType[self.get_nodeattr("dataType")] == DataType["BIPOLAR"]: