quantized model : all convolutions are in fp32 #3235
I am running this example: https://github.com/openvinotoolkit/nncf/blob/develop/examples/post_training_quantization/torch/ssd300_vgg16/main.py
But when I read the .xml file and check the precision on the convolutions, I don't see any "int8"; I only see sections like this:
Answered by AlexKoff88 on Feb 3, 2025
Answer selected by MaximProshin
Hi @etienne87, thanks for your interest.
You should search for "i8" in the .xml file.
If you need more details on how quantization is represented and used in the OpenVINO IR, please refer to the:
and other relevant documents.
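As a quick sanity check, you can scan the IR .xml for low-precision ports programmatically instead of eyeballing it. This is a minimal sketch that assumes ports in the IR carry a `precision` attribute (as in IR v10/v11); the `sample` fragment below is synthetic, not a real exported model:

```python
# Sketch: count how many ports in an OpenVINO IR .xml are stored at each precision.
# In a quantized model, the weight constants typically show up as "i8" even
# though the convolution's activation outputs remain fp32.
import xml.etree.ElementTree as ET
from collections import Counter

def precision_histogram(ir_xml_text: str) -> Counter:
    """Map each port precision (lowercased) to its occurrence count."""
    root = ET.fromstring(ir_xml_text)
    counts = Counter()
    for port in root.iter("port"):
        prec = port.get("precision")
        if prec:
            counts[prec.lower()] += 1
    return counts

# Tiny synthetic IR fragment for illustration only:
sample = """<net name="demo" version="11">
  <layers>
    <layer id="0" name="weights" type="Const">
      <output><port id="0" precision="I8"/></output>
    </layer>
    <layer id="1" name="conv" type="Convolution">
      <output><port id="0" precision="FP32"/></output>
    </layer>
  </layers>
</net>"""

print(precision_histogram(sample))
```

For a real model you would read the exported .xml from disk (e.g. `ET.parse("model.xml").getroot()`) and look for a nonzero "i8" count.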