Add a section on back-translation best practice, w.r.t. evaluation #83
Comments
Possibly relevant here:

It has this to say:

Which seems to suggest that the method is:

With the theory being that, as generation gets better, the SLT back-translation should get more accurate. They don't train a back-translation model themselves; that is done independently.
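A minimal sketch of what that back-translation evaluation loop might look like (the model classes here are illustrative stubs, not any existing API; only sacrebleu is a real dependency):

```python
# A minimal, hedged sketch of back-translation evaluation for sign language
# production. Both model classes are made-up stubs standing in for real
# models; only sacrebleu is a real package (pip install sacrebleu).
from sacrebleu import corpus_bleu


class ProductionModel:
    """Stub text -> pose generator (the model being evaluated)."""

    def generate(self, text: str) -> list[str]:
        # Fake pose sequence: one "pose token" per word.
        return [f"pose_{word}" for word in text.split()]


class SLTModel:
    """Stub pose -> text back-translation model.

    In practice this should be trained independently, on ground-truth
    pose/text pairs, not on the production model's outputs.
    """

    def translate(self, poses: list[str]) -> str:
        # Fake back-translation: strip the "pose_" prefix again.
        return " ".join(p.removeprefix("pose_") for p in poses)


def back_translation_bleu(production: ProductionModel, slt: SLTModel, texts: list[str]) -> float:
    """Generate signing for each text, back-translate it, and score against the originals."""
    hypotheses = [slt.translate(production.generate(t)) for t in texts]
    return corpus_bleu(hypotheses, [texts]).score


if __name__ == "__main__":
    texts = ["hello how are you", "good morning"]
    print(back_translation_bleu(ProductionModel(), SLTModel(), texts))
```

Under this setup, a higher BLEU between the back-translated text and the original input suggests more faithful production, which is the "as generation gets better, the back-translation gets more accurate" intuition above.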
Maybe we also need a note about back-translation: people use it (Progressive Transformers, SignLLM), but the outputs are incoherent. This is because they train the back-translation models on the translation model's outputs, and not independently, as one should.
Originally posted by @AmitMY in #77 (comment)
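To make that distinction concrete, here is a small sketch of the two training setups (all names here are placeholders; this is not code from Progressive Transformers, SignLLM, or any other cited system):

```python
# Hedged sketch of the two ways the back-translation (SLT) model can be trained;
# everything here is a placeholder, not the setup of any specific system.

def train_slt(pose_text_pairs):
    """Stub: fit a pose -> text model on the given pairs and return it."""
    return {"num_training_pairs": len(pose_text_pairs)}


# Independent (recommended): the SLT evaluator only ever sees ground-truth
# pose/text pairs, so its scores reflect how intelligible the signing is
# to a model that has never seen the generator's quirks.
ground_truth_pairs = [(["pose_a", "pose_b"], "hello world")]  # placeholder data
slt_independent = train_slt(ground_truth_pairs)

# Coupled (the pitfall described above): training the SLT model on the
# production model's own outputs teaches it to decode that model's artifacts,
# so even incoherent signing can back-translate to plausible text and score well.
generated_pairs = [(["blurry_pose_1", "blurry_pose_2"], "hello world")]  # stand-in for generator outputs
slt_coupled = train_slt(generated_pairs)
```

The proposed section could recommend the first setup and flag the second as a known failure mode when reporting back-translation scores.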