Implement NNPOps Optimised ANI #21
Conversation
Note that #20 tries to do the same.
@not-matt, regarding the performance, could you run the benchmarks and compare with #20 (comment)?
With the benchmark slightly modified to fix a couple of bugs (#20 (comment)), I got 0.0031 s/step using NNPOps. This is in line with the results from that PR; the implementations are pretty much identical under the hood.
Regarding the optimization keyword, we have 3 cases:

```python
openmmml.MLPotential('ani2x', optimise=None)
openmmml.MLPotential('ani2x', optimise='cuaev')
openmmml.MLPotential('ani2x', optimise='nnpops')
```

I think this will be the simplest API for a user.
How about calling it
If no more comments, the API should look like this:

```python
openmmml.MLPotential('ani2x', implementation='torchani')
openmmml.MLPotential('ani2x', implementation='cuaev')
openmmml.MLPotential('ani2x', implementation='nnpops')  # default
```

@not-matt could you implement?
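The keyword handling agreed on above could be validated with something like the following pure-Python sketch; the function name, error message, and normalization are hypothetical illustrations, not the actual openmm-ml code:

```python
# Hypothetical sketch of validating/dispatching the 'implementation'
# keyword discussed above; not the actual openmm-ml implementation.
VALID_IMPLEMENTATIONS = ("torchani", "cuaev", "nnpops")

def resolve_implementation(implementation="nnpops"):
    """Normalize and validate the implementation keyword.

    'nnpops' is the default, matching the API proposed in this thread.
    """
    impl = implementation.lower()
    if impl not in VALID_IMPLEMENTATIONS:
        raise ValueError(
            f"Unsupported implementation {implementation!r}; "
            f"expected one of {VALID_IMPLEMENTATIONS}"
        )
    return impl

print(resolve_implementation())          # default
print(resolve_implementation("cuaev"))
```

Failing fast on an unknown keyword keeps the error at `MLPotential` construction time rather than deep inside `addForces`.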
Yep, can do.
@raimis

```python
self.model.aev_computer.use_cuda_extension = True
```

@yueyericardo could you confirm?
confirmed! |
@not-matt any progress?
@raimis Yep, just about there. I'm trying to test it thoroughly with each of the different optimisation options.
```python
if implementation == "nnpops":
    device = torch.device('cuda')
    model = OptimizedTorchANI(model, species).to(device)
elif implementation == "cuaev":
    self.model.aev_computer.use_cuda_extension = True
```
This doesn't work
```python
# @@ -82,14 +83,26 @@ def addForces(self, ...
if atoms is not None:
    includedAtoms = [includedAtoms[i] for i in atoms]
atomic_numbers = [atom.element.atomic_number for atom in includedAtoms]
species = torch.tensor(atomic_numbers).unsqueeze(0)
```
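For readers unfamiliar with the shape convention: TorchANI expects species as a (molecules, atoms) tensor, so `unsqueeze(0)` adds a batch dimension of size 1. A torch-free illustration of the same transformation (the element table here is a hypothetical stub, not OpenMM's element API):

```python
# Pure-Python illustration of what torch.tensor(...).unsqueeze(0) does to
# the atomic-number list; the symbol lookup is a stub for demonstration.
ATOMIC_NUMBERS = {"H": 1, "C": 6, "N": 7, "O": 8}  # hypothetical subset

def build_species(symbols):
    atomic_numbers = [ATOMIC_NUMBERS[s] for s in symbols]
    # unsqueeze(0): wrap in one more list, giving shape (1, n_atoms)
    return [atomic_numbers]

species = build_species(["O", "H", "H"])
print(species)  # [[8, 1, 1]]
```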
`atomic_numbers` is a more meaningful name for this variable.
Also, it would be good to have a test to make sure things don't get broken.
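A regression test along the lines the reviewers ask for might compare energies from each implementation within a tolerance. The sketch below only shows the pattern; `compute_energy` is a stand-in stub, not a real openmm-ml call, and the reference value is invented for illustration:

```python
# Hypothetical test pattern: check that all implementations agree on the
# energy of the same system. compute_energy is a stub; a real test would
# build an OpenMM System with each MLPotential variant and evaluate it.
def compute_energy(implementation):
    stub_energy = -76.313  # illustrative placeholder value
    return stub_energy

def test_implementations_agree(tol=1e-3):
    energies = [compute_energy(i) for i in ("torchani", "cuaev", "nnpops")]
    ref = energies[0]
    assert all(abs(e - ref) < tol for e in energies)

test_implementations_agree()
```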
Is this ready to merge?
I think we still need some tests to make sure things keep working.
What else has to be done before we can merge this? It would be really nice to get it in before building the OpenMM 8.0 beta, so beta users can try out the full stack, including the optimized kernels.
This is superseded by #35. |
These changes simplify the use of NNPOps' OptimizedTorchANI in OpenMM simulations, allowing ANI2x to be used with a ~6x speedup on my machine. It also re-enables periodic boundary conditions for CUDA (see aiqm/torchani#607 (comment)).
Thanks to @jchodera for his example code openmm/openmm-torch#59 (comment)
Addresses #14
Demonstration
https://github.com/meyresearch/ANI-Peptides/blob/main/demos/ANI_minimal.ipynb
Important to note that this PR lacks an option to choose between classic TorchANI and OptimizedTorchANI, instead enforcing NNPOps as the only method. I'd be happy to implement a way to choose between the two, but I'm not sure how best to fit it into this library's structure.
eg.
or perhaps