FIX Correctly determine word embeddings on Deberta #2257

Merged

Conversation

BenjaminBossan
Member

Description

After a recent change in transformers (huggingface/transformers#22105), PEFT could no longer determine the word embeddings from Deberta. This PR provides a very minimal fix that correctly determines the word embeddings again.

Failing CI

To reproduce, run: pytest tests/test_feature_extraction_models.py -k "prompt_tuning and deberta" -v

Details

Previously, the word embeddings were determined in the following manner:

  1. Find the transformers_backbone by checking the base model's children for PreTrainedModel instances.
  2. If none is found, the model itself is considered the transformers backbone.
  3. On the backbone, look for a module whose weight has the same size as the vocabulary; that module is then assumed to be the word embeddings.

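The following Python sketch shows roughly how this lookup worked; it is not the literal PEFT code, and the helper name, the vocab_size argument, and the restriction to nn.Embedding modules are simplifications for illustration:

```python
import torch.nn as nn
from transformers import PreTrainedModel


def find_word_embeddings_old(model, vocab_size):
    # Step 1: look for a child that is itself a PreTrainedModel and treat it
    # as the transformers backbone; step 2: fall back to the model itself.
    backbone = model
    for child in model.children():
        if isinstance(child, PreTrainedModel):
            backbone = child
            break
    # Step 3: on the backbone, pick the module whose weight size matches the
    # vocabulary size and assume it is the word embeddings.
    for module in backbone.modules():
        if isinstance(module, nn.Embedding) and module.weight.shape[0] == vocab_size:
            return module
    return None
```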

Before the mentioned transformers PR, step 1 did not find anything, so step 2 applied. After the PR, however, DebertaEncoder is an instance of PreTrainedModel (I asked internally, this is intended). Therefore, the encoder is now considered the transformers backbone. But the encoder does not contain the word embeddings, so step 3 fails.
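For illustration (the checkpoint name is just an example, and the exact output depends on the installed transformers version), this is roughly what step 1 now picks up:

```python
from transformers import AutoModel, PreTrainedModel

model = AutoModel.from_pretrained("microsoft/deberta-base")
# On recent transformers versions, the encoder child is itself a PreTrainedModel,
# so step 1 selects it as the backbone even though it holds no word embeddings.
backbones = [type(child).__name__ for child in model.children() if isinstance(child, PreTrainedModel)]
print(backbones)  # expected to include 'DebertaEncoder' on affected versions
```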

The fix in this PR is to first explicitly check for model.embeddings.word_embeddings and, if this attribute is found, use it as the word embeddings. Only when it is not found do we fall back to the method described above. This way, we can successfully determine the word embeddings on models like Deberta.
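A minimal sketch of the new lookup order, reusing find_word_embeddings_old from the sketch above (again illustrative names, not the literal PEFT implementation):

```python
import torch.nn as nn


def find_word_embeddings_fixed(model, vocab_size):
    # New first step: explicitly check for model.embeddings.word_embeddings,
    # which exists on Deberta-style models.
    embeddings = getattr(model, "embeddings", None)
    word_embeddings = getattr(embeddings, "word_embeddings", None)
    if isinstance(word_embeddings, nn.Embedding):
        return word_embeddings
    # Otherwise fall back to the previous backbone/vocab-size based search.
    return find_word_embeddings_old(model, vocab_size)
```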

This whole code is a bit messy and could probably be improved. However, changing the logic too much could inadvertently break for some existing model architectures that are not included in the tests. Therefore, I chose this method which leaves the existing logic mostly intact.

For reviewers: Note that the previous logic has not been changed, just moved into an if block. The actual diff is thus much smaller than it appears at first glance.

BenjaminBossan merged commit f86522e into huggingface:main on Dec 4, 2024
14 checks passed
BenjaminBossan deleted the fix-deberta-word-embeddings branch on December 4, 2024 at 14:34