Loading Roberta Model #1175
Unanswered
Tacacs-1101 asked this question in Q&A
Hi, I want to load a RoBERTa model in a Java environment. The trained model can be loaded in DJL, but how do I tokenize the input sentence during inference in DJL, given that RoBERTa uses a different tokenizer than BERT?

Replies: 1 comment

If you want your model to work correctly, you need to give it input data in exactly the same format as the data it was trained on. So the best approach is to implement the RoBERTa tokenizer in Java. You can look at our nlp folder to see some of the utilities we have for preprocessing and tokenizers, especially the nlp pre-processing folder. Alternatively, you could see if something like SentencePiece will work, or find a way to call Python from Java, such as Jython.
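The reply above does not mention it, but one way to follow the "implement it in Java" route without hand-writing RoBERTa's byte-level BPE is DJL's HuggingFace tokenizers binding. The sketch below is a minimal example, assuming the ai.djl.huggingface:tokenizers extension is available in your DJL version and using roberta-base as a placeholder model id.

```java
import ai.djl.huggingface.tokenizers.Encoding;
import ai.djl.huggingface.tokenizers.HuggingFaceTokenizer;

import java.util.Arrays;

public class RobertaTokenizeExample {

    public static void main(String[] args) throws Exception {
        // Load the RoBERTa tokenizer by model id (a local tokenizer.json path
        // can be used instead if the model was fine-tuned offline).
        try (HuggingFaceTokenizer tokenizer = HuggingFaceTokenizer.newInstance("roberta-base")) {
            Encoding encoding = tokenizer.encode("DJL can run a RoBERTa model in Java.");

            // Token ids and attention mask, in the same format the model
            // expects at inference time.
            long[] inputIds = encoding.getIds();
            long[] attentionMask = encoding.getAttentionMask();

            System.out.println("input_ids:      " + Arrays.toString(inputIds));
            System.out.println("attention_mask: " + Arrays.toString(attentionMask));
        }
    }
}
```

If that extension is not an option for your setup, the alternatives in the reply (SentencePiece or calling Python from Java) remain the fallbacks.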