Using Kotlin for Inference with custom model #857
-
Hello, I've been running into an issue using DJL for inference on a custom PyTorch model. I recently trained a PyTorch model on the 17 flowers dataset, converted it to a TorchScript (JIT) model via tracing, and verified that the inference output still matched on the 17 flowers images. I then attempted to follow the notebook here, converting it to Kotlin, which I didn't expect to cause any real issues. Here is a sanitized version of what I have:
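Roughly the following, assuming a traced model file in a `models/flowers` directory with a `synset.txt` of class names next to it, and standard ImageNet-style transforms (the paths and normalization values here are placeholders, not my exact code):

```kotlin
import ai.djl.modality.Classifications
import ai.djl.modality.cv.Image
import ai.djl.modality.cv.ImageFactory
import ai.djl.modality.cv.transform.CenterCrop
import ai.djl.modality.cv.transform.Normalize
import ai.djl.modality.cv.transform.Resize
import ai.djl.modality.cv.transform.ToTensor
import ai.djl.modality.cv.translator.ImageClassificationTranslator
import ai.djl.repository.zoo.Criteria
import java.nio.file.Paths

fun main() {
    // Pre-processing is meant to mirror the Python transforms:
    // resize, center-crop, to-tensor (scales to [0,1], HWC -> CHW), normalize.
    val translator = ImageClassificationTranslator.builder()
        .addTransform(Resize(256))
        .addTransform(CenterCrop(224, 224))
        .addTransform(ToTensor())
        .addTransform(
            Normalize(
                floatArrayOf(0.485f, 0.456f, 0.406f), // ImageNet means -- placeholder
                floatArrayOf(0.229f, 0.224f, 0.225f)  // ImageNet stds  -- placeholder
            )
        )
        .optApplySoftmax(true)
        .build()

    // Points at the directory holding the traced .pt file; DJL also looks for
    // a synset.txt of class names there.
    val criteria = Criteria.builder()
        .setTypes(Image::class.java, Classifications::class.java)
        .optModelPath(Paths.get("models/flowers"))
        .optTranslator(translator)
        .build()

    criteria.loadModel().use { model ->
        model.newPredictor().use { predictor ->
            val img = ImageFactory.getInstance()
                .fromFile(Paths.get("17flowers/image_0001.jpg")) // sample image -- placeholder
            println(predictor.predict(img))
        }
    }
}
```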
When I tried to verify that the inference output matched, I saw a large number of mismatches. I didn't write code to determine the exact percentage, but my expectation was a 100% match. Does anyone have ideas on where to look for why the mismatch might be occurring, or is there something obvious that would cause this to happen? Thanks
-
One reason you could see the accuracy drop is if your pre-processing differs between the Java and the Python code. I would recommend checking this first. I see two potential issues with it:
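A quick way to check is to run one image through the exact transform pipeline on the Kotlin side and diff the resulting tensor against what the Python transforms produce for the same image. A minimal sketch, assuming the same placeholder transforms and paths as above:

```kotlin
import ai.djl.modality.cv.ImageFactory
import ai.djl.modality.cv.transform.CenterCrop
import ai.djl.modality.cv.transform.Normalize
import ai.djl.modality.cv.transform.Resize
import ai.djl.modality.cv.transform.ToTensor
import ai.djl.ndarray.NDList
import ai.djl.ndarray.NDManager
import ai.djl.translate.Pipeline
import java.nio.file.Paths

fun main() {
    NDManager.newBaseManager().use { manager ->
        val img = ImageFactory.getInstance()
            .fromFile(Paths.get("17flowers/image_0001.jpg")) // same sample image -- placeholder
        // Apply the exact transforms used at inference time.
        val pipeline = Pipeline(
            Resize(256),
            CenterCrop(224, 224),
            ToTensor(),
            Normalize(
                floatArrayOf(0.485f, 0.456f, 0.406f),
                floatArrayOf(0.229f, 0.224f, 0.225f)
            )
        )
        val processed = pipeline.transform(NDList(img.toNDArray(manager))).singletonOrThrow()
        // Shape should be (3, 224, 224); the leading values should match the
        // flattened output of the Python transforms on the same image.
        println(processed.shape)
        println(processed.toFloatArray().take(10))
    }
}
```

If the shapes or leading values differ, the mismatch is in the pre-processing rather than in the traced model itself.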