
Update NER model with huggingface transformer #12914


Hi @shahryary, for the training run with your dataset A this config is fine. If you just swap in the second dataset for the next run, however, your model will learn from scratch. Instead, source both the transformer and the ner component from the pipeline that was trained on A. From then on you can keep using the same config, as long as you always source from the most recently trained model.

In practice, the whole [components.ner] would become just one line:

source = "./current_model"

(Where ./current_model is the location of the model trained with the previous dataset.)
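Spelled out, the two sourced components might look like this in the config (a sketch assuming the standard transformer + ner pipeline layout; `./current_model` stands for the output directory of the previous run):

```ini
[components.transformer]
source = "./current_model"

[components.ner]
source = "./current_model"
```

Sourcing replaces the components' factory settings entirely, so the nested blocks that configured them from scratch are no longer needed.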

Note that catastrophic forgetting might become an issue if you train with datasets with diverging distributions over time. There are some…
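One common mitigation, not spelled out in the reply above but a standard technique sometimes called rehearsal, is to mix a sample of the earlier dataset's annotated examples into each later training run so the model keeps seeing the old entity distribution. A minimal sketch (the example data and the `mix_datasets` helper are hypothetical, not part of spaCy):

```python
import random

def mix_datasets(new_examples, old_examples, rehearsal_fraction=0.25, seed=0):
    """Blend a sample of old annotated examples into the new training set
    so the model keeps seeing entities from the earlier distribution."""
    rng = random.Random(seed)
    n_old = int(len(new_examples) * rehearsal_fraction)
    sampled = rng.sample(old_examples, min(n_old, len(old_examples)))
    combined = new_examples + sampled
    rng.shuffle(combined)
    return combined

# Hypothetical toy examples in spaCy's (text, annotations) training format.
dataset_a = [("Acme Corp hired Jane.", {"entities": [(0, 9, "ORG")]})] * 8
dataset_b = [("Berlin is lovely.", {"entities": [(0, 6, "GPE")]})] * 8

train_set = mix_datasets(dataset_b, dataset_a)
```

The blended `train_set` can then be converted to a `.spacy` corpus for the next run; tuning `rehearsal_fraction` trades off retention of old entities against fit on the new data.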

Answer selected by rmitsch
Labels: training (Training and updating models), feat / ner (Feature: Named Entity Recognizer), feat / transformer (Feature: Transformer)