Default combined CorefUD model has inconsistent outputs for English #1450
I think this is the one that's trained with the mixed multilingual backbone, and possibly with a mixture with/without singletons; we can ship a GUM-only model, or perhaps even OntoNotes + singletons. @amir-zeldes — do you have the OntoNotes augmented dataset somewhere? Would love to train a test model off of that.
@Jemoka that would be amazing! I think we'd actually want all of those different models if possible, since I think ON w/ singletons + GUM would be great for mention detection, but they have rather different coref guidelines, so that could create a hodgepodge of inconsistent clustering predictions. It's an empirical question, but I could imagine that if you were scoring GUM-style coref incl. singletons, throwing in all predictions from both models might actually outperform either model by itself and prevent the low-recall issues with ON-only models. Then again, it might need some rule-based postprocessing...

@yilunzhu has put the ON singleton predictions up on GitHub; I think this is the latest (Yilun, please correct me if there's something newer).

For training with GUM it might also be worth waiting a little - we're close to ready to release GUM v11, with new data, probably in about 2 weeks. I can post to this thread when that happens if that's of interest.
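To make the "throw in all predictions from both models, plus rule-based postprocessing" idea concrete, here is a minimal sketch. The function name, span format, and the crossing-span heuristic are all illustrative assumptions, not part of Stanza or either model's actual output format:

```python
# Hedged sketch: combine mention predictions from two coref models
# (e.g. an OntoNotes-trained and a GUM-trained one). Span format
# (start, end) over token indices is an assumption for illustration.

def merge_mentions(primary, secondary):
    """Union of (start, end) token spans: exact duplicates are dropped,
    and secondary spans that partially overlap (cross) a primary span
    are discarded as likely guideline conflicts."""
    merged = set(primary)
    for start, end in secondary:
        crosses = any(s < start < e < end or start < s < end < e
                      for s, e in primary)
        if (start, end) not in merged and not crosses:
            merged.add((start, end))
    return sorted(merged)

on_spans = [(0, 2), (5, 6)]                    # e.g. "The cat", "it"
gum_spans = [(0, 2), (3, 4), (5, 6), (8, 9)]   # adds singletons
print(merge_mentions(on_spans, gum_spans))
# → [(0, 2), (3, 4), (5, 6), (8, 9)]
```

Whether "prefer the primary model on conflicts" is the right policy is exactly the empirical question raised above; a real merge would also have to reconcile cluster assignments, not just mention spans.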
Yes, this is the latest version.
Sounds good; will hold off on that. In the meantime I will train an English OntoNotes + singletons model and report back on this thread.
Done! CC @amir-zeldes
Here are the weights: https://drive.google.com/drive/folders/14EwOVRSrdbp9cjARgTu-DNaCjryNePJW?usp=sharing

To use them:

```python
nlp = stanza.Pipeline('en', processors='tokenize,coref', coref_model_path="./the_path_to/roberta_lora.pt")
```
Thank you, @Jemoka! To make it more convenient to get the model, I uploaded it to HuggingFace. You should be able to download it using Stanza version 1.10.
OK, coref still has some issues for the sample text I was using above, but this is much, much better for mention detection. The only systematic concerns I have about it are direct results of the ON guidelines, for example the treatment of appositions.

But either way, this is worlds better - thanks so much for making this model available! We're getting close to wrapping up the latest GUM release; I'll post a link to the data as soon as it's ready.
Sounds good; once the next GUM is released I'll be glad to build a model for it. There's a chance that upping top-k in the initial filtering step will be better for things like coordination with a lot of nesting.
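For readers unfamiliar with the filtering step mentioned here, a minimal sketch of what top-k candidate filtering looks like in general (the scores, spans, and function name are made up for illustration; this is not Stanza's actual implementation):

```python
# Hedged sketch of top-k mention filtering: keep only the k
# highest-scoring candidate spans before the pairwise coref step.
import heapq

def filter_top_k(candidates, k):
    """candidates: list of ((start, end), score) pairs.
    Returns the k highest-scoring spans, sorted by position."""
    top = heapq.nlargest(k, candidates, key=lambda c: c[1])
    return sorted(span for span, _ in top)

cands = [((0, 2), 0.9), ((1, 2), 0.3), ((3, 7), 0.7), ((4, 5), 0.6)]
print(filter_top_k(cands, 2))  # → [(0, 2), (3, 7)]
```

The point about nested coordination is that deeply nested structures generate many plausible overlapping candidates, so a small k can prune correct inner spans before clustering ever sees them.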
I've been testing the CorefUD-trained Stanza model on English and seeing some inconsistent results, especially with regard to singletons. Since the model is trained on data that has singletons (but possibly also data that has no singletons? Is ParCorFull included for English? Or is the default a totally multilingual model?), it should produce predictions for most obvious noun phrases and for the most part it does:
However other times it ignores very obvious mentions, perhaps because figuring out an antecedent is non-trivial:
Notice that the model misses even completely reliable mention spans such as the pronouns "I" or "their", which are virtually guaranteed to be mentions (even if we can find no antecedent, at least in a corpus with singletons they would still be annotated).
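Since pronouns are essentially always mentions in a singleton-annotated corpus, a rule-based sanity check along these lines could flag such misses automatically. The pronoun list, span format, and function name below are assumptions for illustration, not anything Stanza provides:

```python
# Hedged sketch: flag pronoun tokens that no predicted mention covers.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
            "my", "your", "his", "her", "its", "our", "their"}

def uncovered_pronouns(tokens, spans):
    """tokens: list of word strings; spans: (start, end) half-open
    token spans of predicted mentions. Returns indices of pronoun
    tokens that fall inside no predicted mention."""
    covered = {i for s, e in spans for i in range(s, e)}
    return [i for i, tok in enumerate(tokens)
            if tok.lower() in PRONOUNS and i not in covered]

toks = "I gave their dog a treat".split()
print(uncovered_pronouns(toks, [(2, 4)]))  # only "their dog" predicted
# → [0]   ("I" at index 0 was missed)
```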
What I'm actually looking to get is English GUM-like results, and I'm wondering whether this is the result of multi-dataset training/conflicting guidelines (esp. regarding singletons). Is there any chance to get a GUM-only trained model for English?