This repository has been archived by the owner on Feb 3, 2025. It is now read-only.
To get the imagenette-validation-samples directory, run the following from the container:
$ wget https://github.com/sayakpaul/deploy-hf-tf-vision-models/releases/download/3.0/imagenette-validation-samples.tar.gz
$ tar xf imagenette-validation-samples.tar.gz
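Once extracted, the samples need to be read in as raw bytes before calibration. The loader below is an assumption (the issue does not show this step); the directory name comes from the archive above, and the list name `all_images_bytes` matches the conversion snippet referenced in the traceback.

```python
import glob

# Assumed loader: read each downloaded validation sample as raw bytes.
all_images_bytes = []
for path in sorted(glob.glob("imagenette-validation-samples/*")):
    with open(path, "rb") as f:
        all_images_bytes.append(f.read())
```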
When running conversion, I am getting:
Traceback (most recent call last):
File "convert_to_tensor.py", line 41, in <module>
converter.build(input_fn=calibration_input_fn(all_images_bytes))
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1447, in build
func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1602, in __call__
return self._call_impl(args, kwargs)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/wrap_function.py", line 243, in _call_impl
return super(WrappedFunction, self)._call_impl(
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1620, in _call_impl
return self._call_with_flat_signature(args, kwargs, cancellation_manager)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1669, in _call_with_flat_signature
return self._call_flat(args, self.captured_inputs, cancellation_manager)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 1860, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py", line 497, in call
outputs = execute.execute(
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
Detected at node 'StatefulPartitionedCall/PartitionedCall/map/TensorArrayUnstack/TensorListFromTensor' defined at (most recent call last):
File "convert_to_tensor.py", line 40, in <module>
converter.convert()
Node: 'StatefulPartitionedCall/PartitionedCall/map/TensorArrayUnstack/TensorListFromTensor'
Detected at node 'StatefulPartitionedCall/PartitionedCall/map/TensorArrayUnstack/TensorListFromTensor' defined at (most recent call last):
File "convert_to_tensor.py", line 40, in <module>
converter.convert()
Node: 'StatefulPartitionedCall/PartitionedCall/map/TensorArrayUnstack/TensorListFromTensor'
2 root error(s) found.
(0) INVALID_ARGUMENT: Tensor must be at least a vector, but saw shape: []
[[{{node StatefulPartitionedCall/PartitionedCall/map/TensorArrayUnstack/TensorListFromTensor}}]]
[[StatefulPartitionedCall/GatherV2/_426]]
(1) INVALID_ARGUMENT: Tensor must be at least a vector, but saw shape: []
[[{{node StatefulPartitionedCall/PartitionedCall/map/TensorArrayUnstack/TensorListFromTensor}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_pruned_43026]
Without converter.build() the conversion succeeds but the latency is higher.
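One possible cause (an assumption, not confirmed by the logs): "Tensor must be at least a vector, but saw shape: []" typically means the calibration input_fn yielded a single scalar string rather than a rank-1 batch of strings, so the model's internal map over the batch has nothing to unstack. The sketch below uses NumPy as a stand-in so it runs anywhere; with TensorFlow installed you would yield `tf.constant([image_bytes])` instead. The names `calibration_input_fn` and `all_images_bytes` mirror the snippet in the traceback, but the fix itself is a hypothesis.

```python
import numpy as np

def calibration_input_fn(all_images_bytes):
    def input_fn():
        for image_bytes in all_images_bytes:
            # Wrap each compressed string in a length-1 array so it has a
            # batch dimension: shape (1,), not a scalar of shape ().
            yield (np.array([image_bytes], dtype=object),)
    return input_fn

batches = list(calibration_input_fn([b"jpeg-bytes-1", b"jpeg-bytes-2"])())
```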
Notes
I made the model accept compressed image strings to reduce request payload sizes.
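To make the note above concrete, the serving-side idea is a wrapper that maps a decode function over a rank-1 batch of compressed byte strings, analogous to `tf.map_fn(tf.io.decode_jpeg, batch_of_strings, ...)` in the real model. The stand-in below needs no TensorFlow: `zlib` replaces JPEG encoding purely to illustrate the payload-size motivation, and all shapes and names are assumptions.

```python
import zlib
import numpy as np

def decode_batch(compressed_batch, shape=(4, 4, 3)):
    # Each element is one compressed string; the batch must be a vector of
    # them (this is the structure the calibration input must match).
    return np.stack([
        np.frombuffer(zlib.decompress(s), dtype=np.uint8).reshape(shape)
        for s in compressed_batch
    ])

image = np.zeros((4, 4, 3), dtype=np.uint8)
payload = zlib.compress(image.tobytes())  # compressed request payload
batch = decode_batch([payload, payload])
```

The compressed payload is smaller than the raw pixel buffer, which is the stated reason for accepting strings on the wire.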
What am I missing?
System information
NVIDIA
I am using an NGC container for the conversion. Here's how I am running the Docker image:
After this, I get terminal access to the container.
TensorFlow build details within the container
Issue
I am trying to convert a ViT B-16 model from transformers. First, I serialize it as a SavedModel resource; then I run the conversion.
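The conversion flow can be sketched as below. This is a hedged outline of the TF-TRT INT8 workflow, not the exact script from this issue: the paths and the `convert_saved_model` name are hypothetical, and a TensorRT-enabled TensorFlow build (e.g. the NGC container) is required, so the import is kept inside the function.

```python
def convert_saved_model(saved_model_dir, output_dir, input_fn):
    # Requires a TensorRT-enabled TensorFlow build; imported lazily.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    params = trt.TrtConversionParams(
        precision_mode=trt.TrtPrecisionMode.INT8,
        use_calibration=True,
    )
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir,
        conversion_params=params,
    )
    converter.convert(calibration_input_fn=input_fn)  # runs INT8 calibration
    converter.build(input_fn=input_fn)                # pre-builds TRT engines
    converter.save(output_dir)
```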