[tests] enable test_mixed_adapter_batches_lora_opt_timing
on XPU
#2021
Conversation
The issue with this change is that this would mean the test also runs on CPU. As the comment further below indicates, we want to avoid this to prevent flakiness:
peft/tests/test_custom_models.py
Lines 3340 to 3341 in eb5eb6e
# Measure timing of running base and adapter separately vs using a mixed batch. Note that on CPU, the
# differences are quite small, so this test requires GPU to avoid flakiness.
I tried the test again just now on CPU and the time differences are indeed much smaller (~25%) compared to GPU (~150%), so this is still true. One solution would be to check if either GPU or XPU is being used.
Sure, should I add a marker called "require_torch_accelerator", just like in accelerate?
Hmm, I think let's not go that far yet; a simple check at the start of the test function for the device is more explicit. We can add an extra decorator if we need the same check in many places.
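As a minimal sketch of the device check discussed here (the helper name `on_accelerator` is hypothetical and not part of PEFT; the `torch.xpu` probe assumes a recent PyTorch build with Intel XPU support), the test could start with something like:

```python
import pytest
import torch


def on_accelerator() -> bool:
    """Return True if a CUDA GPU or an Intel XPU is available."""
    # hasattr guards against older torch builds without the xpu module.
    has_xpu = hasattr(torch, "xpu") and torch.xpu.is_available()
    return torch.cuda.is_available() or has_xpu


def test_mixed_adapter_batches_lora_opt_timing():
    # Timing gaps on CPU (~25%) are too small to assert on reliably,
    # unlike GPU/XPU (~150%), so skip unless an accelerator is present.
    if not on_accelerator():
        pytest.skip("requires GPU or XPU to avoid flakiness")
    ...  # actual timing comparison goes here
```

This keeps the check local and explicit; if the same guard turns out to be needed in many tests, it could later be promoted to a decorator.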
Sure, let me update!
@BenjaminBossan how about the
Sounds good, I merged that PR. |
Rebase done.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks for extending this test, LGTM.
Btw. is there some public place where I can check the XPU tests?
Do you mean the test summary on XPU?
Yes, so that I and others can check the current state.
Sure, currently we don't upload the test results to a public repo, but let me come up with a solution. Talk to you next Monday.
Thanks. It's not super high priority, but right now if we ever break something in PEFT for XPU, we won't know until someone comes to us to report it. |
We have a 2-step plan:
Thanks for the update on the plan. |
After the fix:
Just like the other tests in this file, this function should not apply only to NVIDIA GPUs; we can actually remove this test marker.