Provide GPU support #21438
Comments
Subtask of #22117 |
@yeandy what pieces are missing to support this? |
The main PR to address this should be #22795 |
No, we can close this now. |
.close-issue |
@yeandy please clarify the scope of this issue. Thanks! |
Clarification on this issue: the PyTorch RunInference implementation moves both the input data and the model to the GPU by calling tensor.to(device). If no GPU exists on the worker, we also fall back implicitly from GPU to CPU. |
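A minimal sketch of the tensor.to(device) pattern described above, including the fall-back to CPU when no GPU is present. The Linear model here is only a stand-in for a user's trained model; this is not the actual RunInference code.

```python
import torch

# Pick the GPU if one is visible, otherwise fall back to the CPU
# (mirrors the implicit GPU -> CPU conversion described above).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2)          # stand-in for a GPU-trained model
model.to(device)                       # move model parameters to the device
batch = torch.randn(3, 4).to(device)   # inputs must live on the same device

with torch.no_grad():
    out = model(batch)                 # runs on whichever device was chosen
```

Both the model and every input batch must be moved, since PyTorch raises an error when a model and its inputs live on different devices.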
FYI, you can edit the description or title of the issues. |
Probably not as a non-committer, unfortunately. |
PyTorch and TensorFlow support GPU-trained models. The RunInference classes need to support running such models, and they must also be configured correctly on Dataflow.
Imported from Jira BEAM-13986. Original Jira may contain additional context.
Reported by: yeandy.
Subtask of issue #21435
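The implicit GPU -> CPU fall-back mentioned in the comments above can be sketched as a small device-selection helper. The name choose_device and the cuda_available flag are hypothetical; the flag stands in for a runtime check such as torch.cuda.is_available() on the worker.

```python
def choose_device(requested: str, cuda_available: bool) -> str:
    """Return the device to run on, falling back to CPU when no GPU exists.

    requested: "GPU" or "CPU", as a user might configure on their pipeline.
    cuda_available: whether the worker actually has a usable GPU.
    """
    if requested.upper() == "GPU" and cuda_available:
        return "cuda"
    # Either the user asked for CPU, or no GPU exists on this worker.
    return "cpu"
```

The resulting string is what would then be passed to tensor.to(device) for both the model and each input batch.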