
Allowing torchani to use GPU #327

Merged: 6 commits into MolSSI:master on Nov 10, 2021
Conversation

@kexul (Contributor) commented Nov 2, 2021

Description

Added a config option for torchani to use the GPU, related to issue #326.
Now local_options={'device': 'cuda'} can be used to select the device:

import qcengine as qcng
import qcelemental as qcel

mol = qcel.models.Molecule.from_data("""
O  0.0  0.000  -0.129
H  0.0 -1.494  1.027
H  0.0  1.494  1.027
""")

inp = qcel.models.AtomicInput(
    molecule=mol,
    driver="energy",
    model={"method": "ani2x"}
    )

ret = qcng.compute(inp, "torchani", local_options={'device':'cuda'})
print(ret.return_result)

kexul added 2 commits November 2, 2021 21:18
Add option allowing perform calculation in gpu
Allow the model to run on GPU
@loriab (Collaborator) commented Nov 2, 2021

Thanks for the PR, @kexul. @dotsdl, how does this relate to how OpenMM or OpenFF in general switches cpu/gpu modes?

codecov bot commented Nov 2, 2021

Codecov Report

Merging #327 (d7ced82) into master (0612e81) will increase coverage by 0.00%.
The diff coverage is 66.66%.

@WardLT (Collaborator) commented Nov 2, 2021 via email

kexul added 3 commits November 3, 2021 10:06
more cpu added
automatically use gpu when available
delete device config
@kexul (Contributor, Author) commented Nov 3, 2021

> Instead of having the user manually turn on GPU usage, could we use torch to detect the presence of a GPU and automatically use it?

That sounds reasonable; I've changed the code according to your suggestion.
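The approach that was ultimately merged replaces the explicit `device` task option with automatic detection. A minimal sketch of that detection pattern (an illustration of the idea discussed above, not the exact diff from this PR) looks like:

```python
# Sketch of GPU auto-detection via torch (illustrative, not the merged code).
# Falls back to CPU when torch is not installed or no CUDA device is visible.
try:
    import torch
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
except ImportError:
    device = "cpu"  # torch unavailable: assume CPU-only

print(str(device))
```

The harness would then move the model and the input coordinate tensors onto `device` with `.to(device)` before evaluation, so no `local_options` flag is needed.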

@WardLT (Collaborator) left a review comment:

The changes look great to me. Thanks for the contribution!

Once you fix the linting problems with make format, this should be ready to merge.

@loriab , do you want to review this as well? Or, can I click the button?

@@ -158,7 +158,7 @@ class TaskConfig(pydantic.BaseModel):
scratch_messy: bool = pydantic.Field(
False, description="Leave scratch directory and contents on disk after completion."
)

Reviewer (Collaborator) comment on the diff:

Introducing the whitespace here probably caused a problem with our linter.

Could you run make format and issue a commit with the changed files?

@kexul (Contributor, Author) replied:

Done by d7ced82 😉

qcengine/programs/torchani.py (resolved)
code format using `make format`
@loriab (Collaborator) commented Nov 10, 2021

Thanks @WardLT, merge at will.

@WardLT WardLT merged commit 4a92ee6 into MolSSI:master Nov 10, 2021
Labels: none yet
Projects: none yet
3 participants