Detailed usage examples and instructions can be found in the Full Documentation.
Simple installation from PyPI
We are planning to release version 1.0.0 in November. In the meantime, we recommend using our pre-release version and opening issues if you find anything unexpected:
pip install unbabel-comet==1.0.0rc9
To develop locally, install Poetry and run the following commands:
git clone https://github.com/Unbabel/COMET
cd COMET
poetry install
Examples from WMT20:
echo -e "Dem Feuer konnte Einhalt geboten werden\nSchulen und Kindergärten wurden eröffnet." >> src.de
echo -e "The fire could be stopped\nSchools and kindergartens were open" >> hyp.en
echo -e "They were able to control the fire.\nSchools and kindergartens opened" >> ref.en
comet-score -s src.de -t hyp.en -r ref.en
You can select another model/metric with the --model flag, and for reference-free (QE-as-a-metric) models you don't need to pass a reference.
comet-score -s src.de -t hyp.en --model wmt20-comet-qe-da
Following the work on Uncertainty-Aware MT Evaluation, you can use the --mc_dropout flag to get a variance/uncertainty value for each segment score. If this value is high, it means the metric is less confident in that prediction.
comet-score -s src.de -t hyp.en -r ref.en --mc_dropout 30
When comparing two MT systems, we encourage you to run the comet-compare command to test for statistical significance with bootstrap resampling (Koehn, 2004).
comet-compare --help
For even more detailed contrastive MT evaluation, please take a look at our new tool, MT-Telescope.
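Scoring within Python: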
from comet import download_model, load_from_checkpoint

# Download the model checkpoint and load it.
model_path = download_model("wmt20-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire."
    },
    {
        "src": "Schulen und Kindergärten wurden eröffnet.",
        "mt": "Schools and kindergartens were open",
        "ref": "Schools and kindergartens opened"
    }
]

# Returns one score per segment and a corpus-level (system) score.
seg_scores, sys_score = model.predict(data, batch_size=8, gpus=1)
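The same Python API works for the reference-free model; the following is a minimal sketch, assuming (as in the CLI example above) that wmt20-comet-qe-da only needs the src and mt fields:

from comet import download_model, load_from_checkpoint

# Reference-free (QE-as-a-metric) scoring: only source and hypothesis are needed.
model_path = download_model("wmt20-comet-qe-da")
model = load_from_checkpoint(model_path)
data = [
    {"src": "Dem Feuer konnte Einhalt geboten werden", "mt": "The fire could be stopped"},
    {"src": "Schulen und Kindergärten wurden eröffnet.", "mt": "Schools and kindergartens were open"},
]
seg_scores, sys_score = model.predict(data, batch_size=8, gpus=1)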
All of the above models are built on top of XLM-R, which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
We recommend the two following models to evaluate your translations:
wmt20-comet-da: DEFAULT Reference-based Regression model built on top of XLM-R (large) and trained on Direct Assessments from WMT17 to WMT19. Same as wmt-large-da-estimator-1719 from previous versions.
wmt20-comet-qe-da: Reference-FREE Regression model built on top of XLM-R (large) and trained on Direct Assessments from WMT17 to WMT19. Same as wmt-large-qe-estimator-1719 from previous versions.
These two models were developed to participate in the WMT20 Metrics shared task (Mathur et al., 2020) and, to date, they are the best-performing metrics at segment level on the recently released Google MQM data (Freitag et al., 2020). Also, in a large-scale study performed by Microsoft Research, these two metrics rank 1st and 2nd in terms of system-level decision accuracy (Kocmi et al., 2020).
For more information about the available COMET models, we invite you to read our metrics descriptions here.
Instead of using pretrained models, you can train your own model with the following command:
comet-train --cfg configs/models/{your_model_config}.yaml
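After training, the resulting checkpoint can be scored with the same Python API shown above; the checkpoint path and inputs below are only hypothetical placeholders:

from comet import load_from_checkpoint

# Hypothetical path to a checkpoint produced by comet-train.
model = load_from_checkpoint("path/to/your/checkpoint.ckpt")
data = [{"src": "...", "mt": "...", "ref": "..."}]  # same input format as above
seg_scores, sys_score = model.predict(data, batch_size=8, gpus=1)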
To run the toolkit tests, run the following commands:
coverage run --source=comet -m unittest discover
coverage report -m
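Optionally, coverage's standard HTML report also works here if you prefer a browsable view:
coverage html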