Closes #53.
Waiting for #208.
This PR adds three different prediction heads for `XModelWithHeads` classes, depending on the model architecture:

- `add_causal_lm_head()` adds a causal LM head for classes that support this type of head in transformers, e.g. GPT-2, BERT, ...
- `add_masked_lm_head()` adds a masked LM head for models with MLM, e.g. BERT, RoBERTa, ...
- `add_seq2seq_lm_head()` adds a sequence-to-sequence LM head for encoder-decoder models, e.g. BART

All heads can be automatically converted from their respective static-head counterparts (e.g. `seq2seq_lm` from `BartForConditionalGeneration`).

To ensure that all conversions work as expected, a new test module was added in `test_adapter_conversion.py`.
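As a rough illustration of what a static-to-flexible head conversion involves, the sketch below copies a static LM head's parameter tensors into a renamed flexible-head layout. This is a minimal standalone sketch, not the library's actual conversion code; all parameter names (`lm_head.*`, `heads.causal_lm.*`) are illustrative assumptions, not real state-dict keys:

```python
import torch

# Stand-in for the LM head parameters of a static-head model
# (shapes are arbitrary; a real head maps hidden_size -> vocab_size).
static_head = {
    "lm_head.weight": torch.randn(10, 4),
    "lm_head.bias": torch.zeros(10),
}

# Hypothetical mapping from static-head parameter names to the
# flexible-head layout used after add_causal_lm_head().
rename = {
    "lm_head.weight": "heads.causal_lm.decoder.weight",
    "lm_head.bias": "heads.causal_lm.decoder.bias",
}

# The conversion itself is just a key rename: the tensors are reused
# unchanged under the new names.
converted = {rename[key]: tensor for key, tensor in static_head.items()}
```

In essence, the conversion re-keys existing weights rather than re-initializing them, which is why a converted head produces the same predictions as its static counterpart.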