Remove out-dated xpu device check code in get_balanced_memory
#2826
Conversation
Thanks! Love the simplification 😍
Could we actually take this a bit further now that the logic is aligned? Perhaps get the expected device type from the `is_xxx_available()` checks, and then only call the
`num_devices = len([d for d in max_memory if torch.device(d).type == expected_device_type and max_memory[d] > 0])`
section once? WDYT
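A minimal sketch of the abstraction being suggested, not the merged implementation. It assumes the `is_npu_available`, `is_mlu_available`, and `is_xpu_available` helpers from `accelerate.utils`; the `count_expected_devices` helper name is hypothetical.

```python
import torch
from accelerate.utils import is_mlu_available, is_npu_available, is_xpu_available


def count_expected_devices(max_memory):
    # Resolve the accelerator type once from the availability checks ...
    if is_npu_available():
        expected_device_type = "npu"
    elif is_mlu_available():
        expected_device_type = "mlu"
    elif is_xpu_available():
        expected_device_type = "xpu"
    else:
        expected_device_type = "cuda"
    # ... then count devices of that type with a non-zero memory budget,
    # using a single list comprehension instead of one per branch.
    return len(
        [d for d in max_memory if torch.device(d).type == expected_device_type and max_memory[d] > 0]
    )
```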
Yes @muellerzr, looks like a good abstraction. We were internally validating the "gpu" predicate of the validation logic.
LGTM!
@muellerzr PR updated with the suggested abstraction.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
LGTM
Beautiful 😍
Nice work!
What does this PR do?
This PR removes the outdated and unnecessary xpu device check in the following code snippet.
The reasons are:
- `d` will definitively not be "cpu".
- `torch.device(d).type == "xpu"` is enough to check for an xpu device, just like the MLU and NPU cases; see the sketch below.
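A hedged sketch of the check that remains after the removal, based only on the reasons above (the original snippet from `get_balanced_memory` is not reproduced here). The `max_memory` mapping is a hypothetical example of the device-identifier-to-budget dict that function works with.

```python
import torch

# Hypothetical max_memory mapping: device identifier -> memory budget in bytes.
max_memory = {0: 10 * 1024**3, 1: 10 * 1024**3, "cpu": 32 * 1024**3}

# After the cleanup, the xpu branch matches the MLU/NPU branches: a plain
# device-type comparison is sufficient, with no extra "cpu" guard or
# xpu-specific device-property predicate.
num_devices = len(
    [d for d in max_memory if torch.device(d).type == "xpu" and max_memory[d] > 0]
)
```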
Who can review?
@SunMarc and @muellerzr