TST: Fix some tests that would fail with torch.compile #949
Some tests would currently fail with `torch.compile`, not because there is anything wrong with how PEFT works with compiled models, but simply because of the way the tests are written. When a model is compiled, the keys of its state dict change. The tests have been adapted to unwrap the compiled model before getting the state dict.

Note that the mentioned issue does not affect saving and loading, because `save_pretrained` is already called on the original module, so there is no issue with mismatched keys.

While working on this, I also fixed the docstring of `get_peft_model_state_dict`.

Running the `torch.compile` tests against this branch failed, probably because it's a fork :-/ https://github.com/huggingface/peft/actions/runs/6238068686/job/16933091996
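For context, a minimal sketch of the key mismatch described above (not PEFT-specific): `torch.compile` wraps the model in an `OptimizedModule`, so every state dict key gains a `_orig_mod.` prefix, and the original module is reachable via the wrapper's `_orig_mod` attribute.

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 2)
compiled = torch.compile(model)

# The compiled wrapper's state dict keys are prefixed with "_orig_mod."
assert all(k.startswith("_orig_mod.") for k in compiled.state_dict())

# Unwrapping recovers the original module and thus the original keys
unwrapped = compiled._orig_mod
assert unwrapped.state_dict().keys() == model.state_dict().keys()
```

This is why comparing state dict keys of a compiled model against those of an uncompiled one fails unless the test unwraps first.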