
fixing bugs found with TorchModel().evaluate() #91

Closed
wants to merge 1 commit

Conversation

kathryn-baker
Contributor

Fixing the following errors:

  • native Python integer types are not recognised or handled
  • a TypeError is not raised for an unsupported integer type, so evaluate() fails silently instead of raising an error
  • an IndexError is raised by evaluate() due to uneven dimensions of the default_tensor (see the sketch after the traceback):
Traceback (most recent call last):
  File "User\lume-deployment\test.py", line 66, in <module>
    outputs = lumemodel.evaluate(input_dict)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "User\AppData\Local\anaconda3\envs\mlflow\Lib\site-packages\lume_model\models\torch_model.py", line 123, in evaluate
    input_tensor = self._arrange_inputs(formatted_inputs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "User\AppData\Local\anaconda3\envs\mlflow\Lib\site-packages\lume_model\models\torch_model.py", line 279, in _arrange_inputs      
    input_tensor = torch.tile(default_tensor, dims=(*input_shapes[0], 1))
                                                     ~~~~~~~~~~~~^^^
IndexError: list index out of range

There are also some odd formatting changes from my auto-formatter.
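
For context, here is a minimal sketch of the kind of handling these fixes point at; the function names, signatures, and fallback behaviour below are illustrative assumptions, not the actual code in lume_model or this PR:

```python
import torch

def format_input(name, value):
    # Accept native Python ints/floats alongside tensors, and raise a
    # TypeError explicitly instead of letting evaluate() fail silently.
    # (Hypothetical helper; the real validation lives in TorchModel.)
    if not isinstance(value, (int, float, torch.Tensor)):
        raise TypeError(f"Unexpected type {type(value)} for input '{name}'.")
    return torch.as_tensor(value, dtype=torch.double)

def arrange_inputs(default_tensor, input_shapes):
    # Guard against scalar-only inputs: input_shapes can then be empty,
    # and indexing input_shapes[0] raises the IndexError shown above.
    # (Hypothetical fallback; the actual fix in the PR may differ.)
    if not input_shapes:
        return default_tensor.unsqueeze(0)
    return torch.tile(default_tensor, dims=(*input_shapes[0], 1))
```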

@pluflou closed this Dec 20, 2024