
displaCy UserWarning: [W006] when evaluating NER model #7672


What do the evaluation results look like for NER and NER per type before you run into this warning? It sounds like your model isn't predicting any entities for these texts.
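If you want to inspect those NER scores (overall and per type) directly in Python rather than via `spacy evaluate`, something along these lines with `Scorer.score_spans` works. This is a minimal sketch with a hand-built "perfect" prediction on an illustrative sentence, just to show which score keys come back; with your real model you'd build the predicted doc by running the text through the pipeline.

```python
import spacy
from spacy.scorer import Scorer
from spacy.training import Example

nlp = spacy.blank("en")  # stand-in; load your trained model instead
text = "Berlin is in Germany."

# Reference doc with the gold annotation.
ref = nlp.make_doc(text)
ref.ents = [ref.char_span(0, 6, label="GPE")]

# Predicted doc; here we fake a perfect prediction for illustration.
pred = nlp.make_doc(text)
pred.ents = [pred.char_span(0, 6, label="GPE")]

# Score entity spans: returns ents_p / ents_r / ents_f plus ents_per_type.
scores = Scorer.score_spans([Example(pred, ref)], "ents")
print(scores["ents_f"])         # overall F-score
print(scores["ents_per_type"])  # per-label p/r/f breakdown
```

If `ents_f` is 0.0 (or the per-type dict is empty) on your dev set, that matches the model predicting no entities at all.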

What do the evaluation steps look like while training? Here's what it looks like for a very simple toy example; in step 3 you can see the score going up as the model improves: https://github.com/explosion/spaCy/tree/de4f4c9b8a46395b5f2be5fa80afd746c4a7f3cb/examples#-get-started-with-a-demo-project

Your check with displacy is only loading the reference docs (with your manual annotation), not the annotations predicted by the model. Instead, try something like:

docs = list(nlp.pipe([doc.text for doc in doc_bin.get_…
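The snippet above is cut off, but it appears to continue by reading the docs back out of the `DocBin` with `DocBin.get_docs` and re-running the raw texts through the pipeline with `nlp.pipe`. Here is a self-contained sketch of that idea, using a blank pipeline and an in-memory `DocBin` (the sentence and the `GPE` annotation are made up for illustration):

```python
import spacy
from spacy import displacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")  # stand-in; load your trained model instead

# Build a small DocBin holding one manually annotated (reference) doc.
doc = nlp.make_doc("Berlin is in Germany.")
doc.ents = [doc.char_span(0, 6, label="GPE")]
doc_bin = DocBin(docs=[doc])

# Reference docs: the manual annotation stored in the DocBin.
ref_docs = list(doc_bin.get_docs(nlp.vocab))

# Predicted docs: pipe the raw texts through the model so you see what
# it actually predicts. A blank pipeline predicts no entities, which is
# exactly the situation that makes displacy emit W006.
pred_docs = list(nlp.pipe([d.text for d in ref_docs]))

# Rendering the reference docs shows the gold entities as a sanity check.
html = displacy.render(ref_docs, style="ent")
```

If rendering `pred_docs` triggers W006 while rendering `ref_docs` shows your entities, the model really is predicting nothing for those texts.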

Answer selected by matteobrv