Now that we know that using a translation model is beneficial, we would like to make it more robust.
Specifically:
- We find that the model works decently when the input is a single word or a short sentence, but not when the input is a long sentence or a paragraph. (In practice, we split the input into sentences before translating, as sketched after this list, but this loses context-dependent information.)
- The model might not be robust to simple semantic variations (e.g., "desk" vs. "table"), likely because it is trained from scratch in a low-data setting.
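To make the current workaround concrete, here is a minimal sketch of the sentence-splitting pipeline, assuming a `translate` callable that maps a single spoken-language sentence to SignWriting (FSW); the regex splitter is a naive stand-in for a proper language-aware segmenter:

```python
import re
from typing import Callable

def translate_paragraph(paragraph: str, translate: Callable[[str], str]) -> list[str]:
    """Split a paragraph into sentences and translate each one independently.

    `translate` is whatever model call maps one spoken-language sentence to
    SignWriting (FSW). Context that spans sentence boundaries (pronouns,
    discourse markers) is lost, which is exactly the limitation noted above.
    """
    # Naive split on terminal punctuation; a production pipeline would use
    # a language-aware sentence segmenter instead.
    sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    return [translate(sentence) for sentence in sentences if sentence]
```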
To address these issues, we propose curating multiple data sources and fine-tuning LLMs.
- The parallel data from SignBank+ is of good quality (though not perfect).
- We can use monolingual data alongside language models to generate synthetic sentence-level data. This would be similar to this paper, with a large language model replacing the "rule-based" approach (see the first sketch after this list).
- Key phrases can be extracted from the SignBank+ data and understood as "template + slots"; templates that include fingerspelling can be used to generate high-quality synthetic data by swapping out the fingerspelled entity (see the second sketch after this list).
- Large sign language translation datasets can be automatically segmented and transcribed. This would create a large multilingual, document-level parallel dataset, with low-quality SignWriting.
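One possible shape for the LLM-based generation, assuming we keep the dictionary-to-pseudo-sentence idea of the rule-based approach and only swap in an LLM for the spoken-language side; the OpenAI client and model name are stand-ins for whichever model we end up using:

```python
from openai import OpenAI  # any instruction-tuned LLM provider would do

client = OpenAI()

def synthesize_pair(glosses_to_fsw: dict[str, str]) -> tuple[str, str]:
    """Turn a handful of SignBank+ dictionary entries into one synthetic pair:
    the LLM writes a fluent sentence from the glosses, and the SignWriting
    side is the concatenation of the entries' FSW."""
    glosses = list(glosses_to_fsw)
    prompt = ("Write one short, natural English sentence using the words "
              f"{', '.join(glosses)} in this order. Output only the sentence.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable instruction-tuned model
        messages=[{"role": "user", "content": prompt}],
    )
    sentence = response.choices[0].message.content.strip()
    # Naive target: sign order follows word order, which is wrong for real
    # sign languages; like the rule-based version, this data is noisy by design.
    signwriting = " ".join(glosses_to_fsw[gloss] for gloss in glosses)
    return sentence, signwriting
```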
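And a minimal sketch of the "template + slots" idea; the template text, the FSW prefix, and the `spell` helper here are all hypothetical placeholders:

```python
from typing import Callable

def fill_template(template_text: str, template_fsw: str,
                  entity: str, spell: Callable[[str], str]) -> tuple[str, str]:
    """Instantiate a "template + slot" pair extracted from SignBank+.

    `spell` maps a word to its fingerspelled SignWriting (FSW) sequence,
    e.g. a per-language fingerspelling lookup (left abstract here).
    """
    return (template_text.format(entity=entity),
            template_fsw.format(entity=spell(entity)))

# Hypothetical template; the FSW prefix is a made-up placeholder, and the
# lambda stands in for a real fingerspelling function.
text, fsw = fill_template("My name is {entity}.",
                          "M518x533S10011482x483 {entity}",
                          "Maria",
                          spell=lambda word: "-".join(word.upper()))
```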
Once data is collected, we will need to find a training recipe that makes sense for multiple languages and varying data proportions, in either translation direction. One option for setting the proportions is sketched below.
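A common recipe for the data-proportion question is temperature-scaled sampling over corpora, as used in multilingual NMT; a sketch with made-up corpus sizes:

```python
def sampling_weights(sizes: dict[str, int], temperature: float = 3.0) -> dict[str, float]:
    """Temperature-scaled sampling over corpora: T=1 samples proportionally
    to corpus size; larger T flattens the distribution, so small clean
    sources and low-resource languages are seen more often."""
    scaled = {name: size ** (1 / temperature) for name, size in sizes.items()}
    total = sum(scaled.values())
    return {name: value / total for name, value in scaled.items()}

# Illustrative only: small clean SignBank+ data vs. large noisy
# automatically transcribed data (the sizes are made up).
print(sampling_weights({"signbank_plus": 100_000,
                        "synthetic_llm": 1_000_000,
                        "auto_transcribed": 10_000_000}))
```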
We would treat the existing models as baselines, and evaluate the SignWriting output using signwriting-evaluation (a minimal harness is sketched below).
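For the evaluation harness, a minimal sketch; sacrebleu's chrF is used here only as a generic placeholder that at least runs on FSW strings, while the real evaluation would use the SignWriting-aware metrics from signwriting-evaluation:

```python
from sacrebleu.metrics import CHRF  # generic stand-in for signwriting-evaluation

def evaluate(hypotheses: list[str], references: list[str]) -> float:
    # FSW strings are plain ASCII, so character-level chrF runs on them,
    # even though it ignores SignWriting's spatial structure.
    return CHRF().corpus_score(hypotheses, [references]).score
```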
If the approach relies heavily on linguistic information, similar to this paper, there are some books, such as for ASL or BSL (I own PDF versions).
If we instead rely on examples, this ASL phrase book can be useful. This goes more into the territory of sign-gpt, where perhaps we can train a large model on all of this information and then use it to generate new data, useful as a synthetic baseline for translation.