
ValueError: [E203] If the transformer embedding layer is not updated during training, make sure to include it in 'annotating components' #13332

In cases where the old data is unavailable but the model's original performance on it was quite good (say, >90% F-score), you could consider creating silver annotations with your model on texts resembling the old data, and then mixing that silver data in with the annotations on your new data. In general, I would advise training from scratch on old+new data instead of using the rehearsal functionality (which is indeed experimental and, as we found out, doesn't actually work with annotating components yet :/)
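As a rough sketch of that silver-annotation approach (the model path, example texts, and file names below are all hypothetical placeholders):

```python
import spacy
from spacy.tokens import DocBin

# Load the previously trained pipeline (path is hypothetical).
nlp = spacy.load("path/to/old_model")

# Texts resembling the old training data (placeholder examples).
old_style_texts = [
    "Acme Corp. opened a new office in Berlin last year.",
    "Dr. Jane Smith joined the research team in 2021.",
]

# Annotate the texts with the old model to create silver data.
silver = DocBin(store_user_data=True)
for doc in nlp.pipe(old_style_texts):
    silver.add(doc)
silver.to_disk("silver_old.spacy")

# Merge the silver data with the gold annotations on the new data,
# so training can consume a single combined corpus.
combined = DocBin().from_disk("silver_old.spacy")
combined.merge(DocBin().from_disk("train_new.spacy"))
combined.to_disk("train_combined.spacy")
```

The combined `.spacy` file could then be used as the training corpus when retraining from scratch.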

Labels: bug (Bugs and behaviour differing from documentation) · feat / pipeline (Feature: Processing pipeline and components)