Disable graph check while tracing #1103
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@@ -23,6 +23,7 @@
 from transformers.generation import GenerationMixin
 from transformers.utils import is_tf_available, is_torch_available

+from openvino.frontend.pytorch.ts_decoder import TorchScriptPythonDecoder
I would prefer if we used something like TorchScriptPythonWrapper or TorchScriptPythonModel; the decoder part doesn't make sense, especially since we are using this wrapper on top of diffusers text_encoder/unet/transformer etc. Will this wrapper ever change and do decoder-specific stuff? If not, why does it have Decoder in its name?
This is a "decoder" in the sense that it is used to decode the PyTorch model. It is used in openvino as an abstraction to get the graph from the model. It is an internal class, but it can be used to provide custom parameters for tracing such as this one.
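For illustration, a minimal sketch of that kind of usage, assuming openvino>=2025.0 where the decoder accepts a trace_kwargs argument; the model and example_input names are placeholders, not code from this PR:

```python
import openvino as ov
from openvino.frontend.pytorch.ts_decoder import TorchScriptPythonDecoder


def convert_without_graph_check(model, example_input):
    # Wrap the PyTorch module in the decoder so conversion options can be
    # passed explicitly; trace_kwargs is assumed to be forwarded to torch.jit.trace.
    decoder = TorchScriptPythonDecoder(
        model,
        example_input=example_input,
        trace_kwargs={"check_trace": False},
    )
    # convert_model accepts the decoder in place of the raw module.
    return ov.convert_model(decoder, example_input=example_input)
```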
This naming has no relation to decoder models; the Decoder is an internal component of openvino responsible for decoding the original framework model during conversion. So far we have only used it internally inside convert_model; now it is created explicitly to resolve advanced conversion options.
Well, we can rename it on import. Would TorchScriptPythonWrapper be a better name?
Oh okay, a confusing name, but it's okay for me if it's part of the openvino naming conventions.
@IlyasMoutawwakil, can we merge this then?
Per @eaidova's request, I moved the decoder import inside the function; this can be useful in case openvino was custom-built without PyTorch support.
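As a rough sketch of that pattern (the function and argument names here are illustrative, not the actual ones in this PR):

```python
def get_decoder(model, example_input):
    # Importing here instead of at module level keeps this file importable on
    # openvino builds that were compiled without the PyTorch frontend; the
    # import only runs when a PyTorch model is actually being converted.
    from openvino.frontend.pytorch.ts_decoder import TorchScriptPythonDecoder

    return TorchScriptPythonDecoder(model, example_input=example_input)
```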
LGTM, can merge once tests pass.
What does this PR do?
This is an alternative to #1064. It will only work with openvino>=2025.0 (openvinotoolkit/openvino#28328). By disabling the graph check after tracing, we reduce memory usage in diffusers.
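For context, a minimal standalone illustration of the mechanism (not code from this PR): torch.jit.trace re-runs the traced graph against the eager module when check_trace is left at its default of True, which costs an additional forward pass and memory; passing check_trace=False skips that verification run.

```python
import torch


class TinyBlock(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x @ x.transpose(-1, -2))


example = torch.randn(1, 64, 64)

# Default (check_trace=True): trace the module, then re-execute the traced
# graph to compare its outputs with the eager module.
checked = torch.jit.trace(TinyBlock(), example)

# check_trace=False skips that extra verification pass, which is what saves
# memory for large diffusers submodels.
unchecked = torch.jit.trace(TinyBlock(), example, check_trace=False)
```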
Before submitting