FIX [PromptTuning] Simple fix for transformers >= 4.38 (#1484)
* fix for transformers >= 4.38

* style
younesbelkada authored Feb 19, 2024
1 parent ede3c7d commit 8a0dce2
Showing 1 changed file with 6 additions and 0 deletions.
src/peft/peft_model.py
@@ -1205,6 +1205,12 @@ def prepare_inputs_for_generation(self, *args, task_ids: torch.Tensor = None, **
model_kwargs["inputs_embeds"] = torch.cat((prompts, inputs_embeds), dim=1)
model_kwargs["input_ids"] = None

# For transformers>=4.38.0 - for some architectures such as Llama, `cache_position` is
# passed in the forward pass to keep track of the position ids of the cache. We have to
# pop that from `model_kwargs` as `cache_position` is properly created by the model, using the passed
# `inputs_embeds`: https://github.com/huggingface/transformers/blob/593230f0a1150ea9c0477b9d859f25daf73c8c33/src/transformers/models/llama/modeling_llama.py#L956
_ = model_kwargs.pop("cache_position", None)

return model_kwargs
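For context, here is a minimal sketch (not part of the commit) of the code path this change affects. Prompt tuning prepends virtual tokens as `inputs_embeds`, so `generate()` routes through the `prepare_inputs_for_generation` hook patched above; with this fix, the stale `cache_position` computed from the shorter `input_ids` is dropped and the model recreates it from the longer embedded sequence. The checkpoint name and `num_virtual_tokens` value are illustrative assumptions, not taken from the commit.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; any causal LM supported by PEFT works
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach prompt tuning: 8 virtual tokens are prepended as embeddings, so the
# sequence the model actually sees is longer than the tokenized input_ids.
peft_config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=8)
model = get_peft_model(base_model, peft_config)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
# generate() calls PeftModelForCausalLM.prepare_inputs_for_generation, which
# swaps input_ids for the concatenated [prompt; input] embeddings and, with
# this fix, pops `cache_position` so the model rebuilds it from inputs_embeds.
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))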


