GPU error with custom pairwise interactions #134
Thanks for reporting this, I'm not sure I've ever used 3 interactions on the GPU and it looks like that's the issue. One workaround might be to keep the loop but loop over 2:N. I'll have a more thorough look next week.
Thank you for looking into it! I would like to let you know that the reason for the reappearance of the error after modifying the force evaluation was the energy calculation (probably for the same reason), on these lines:

```julia
pe = potential_energy(inters[1], dr, coord_i, coord_j, atoms[i], atoms[j],
                      boundary, special)
for inter in inters[2:end]
    pe += potential_energy(inter, dr, coord_i, coord_j, atoms[i], atoms[j],
                           boundary, special)
end
```

Making the changes on both the force and energy calculations now makes the errors completely disappear for any combination of three or more pairwise potentials. Regarding the loop over 2:N, I tried it but found the error reappeared. I will also try to figure out the reason.
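For reference, the unrolled equivalent of that loop, written out explicitly for three interactions, would look something like this:

```julia
# Sketch: explicit sum over the three interactions instead of the loop above
pe = potential_energy(inters[1], dr, coord_i, coord_j, atoms[i], atoms[j], boundary, special) +
     potential_energy(inters[2], dr, coord_i, coord_j, atoms[i], atoms[j], boundary, special) +
     potential_energy(inters[3], dr, coord_i, coord_j, atoms[i], atoms[j], boundary, special)
```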
In particular this occurs when different interactions are used; using three of the same interaction is okay.

This works:

```julia
f = force_gpu(inters[1], dr, coord_i, coord_j, atoms[i], atoms[j], boundary, special)
f += force_gpu(inters[2], dr, coord_i, coord_j, atoms[i], atoms[j], boundary, special)
f += force_gpu(inters[3], dr, coord_i, coord_j, atoms[i], atoms[j], boundary, special)
```

But this errors:

```julia
f = force_gpu(inters[1], dr, coord_i, coord_j, atoms[i], atoms[j], boundary, special)
for inter_i in 2:3
    f += force_gpu(inters[inter_i], dr, coord_i, coord_j, atoms[i], atoms[j], boundary, special)
end
```

Extracting it out to a function also errors. A solution would be to auto-generate the code at the top. I tried metaprogramming for this but couldn't get it to work and got errors; I'm not the strongest at that though.

One workaround is to define functions like:

```julia
function addforces(inters::Tuple{<:Any, <:Any, <:Any}, dr, coord_i, coord_j, atom_i, atom_j,
                   boundary, special)
    return force_gpu(inters[1], dr, coord_i, coord_j, atom_i, atom_j, boundary, special) +
           force_gpu(inters[2], dr, coord_i, coord_j, atom_i, atom_j, boundary, special) +
           force_gpu(inters[3], dr, coord_i, coord_j, atom_i, atom_j, boundary, special)
end
```

This works but would require defining multiple functions. Maybe the function definitions could be written with metaprogramming (a rough sketch is below, after the MWE). I wonder if @vchuravy has any ideas as to why the first example in this comment works in a CUDA kernel and the second doesn't, or if there is an easy workaround.

MWE, for reference:

```julia
using Molly, CUDA
boundary = CubicBoundary(1.0u"nm")
coords = CuArray(place_atoms(100, boundary; min_dist=0.1u"nm"))
atoms = CuArray([Atom(σ=0.02u"nm", ϵ=0.1u"kJ * mol^-1") for _ in 1:100])
nf = DistanceNeighborFinder(eligible=CuArray(trues(100, 100)), dist_cutoff=0.2u"nm")
lj = LennardJones(use_neighbors=true)
coul = Coulomb(use_neighbors=true)
ss = SoftSphere(use_neighbors=true)
sys2 = System(coords=coords, atoms=atoms, boundary=boundary, neighbor_finder=nf, pairwise_inters=(lj, coul,))
sys3 = System(coords=coords, atoms=atoms, boundary=boundary, neighbor_finder=nf, pairwise_inters=(lj, coul, ss))
neighbors = find_neighbors(sys2)
forces(sys2, neighbors) # Works
forces(sys3, neighbors) # Errors
```
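As for writing the function definitions with metaprogramming, a rough, untested sketch could look like the following (the addforces name and the range of tuple lengths are placeholders):

```julia
# Untested sketch: generate an addforces method for each tuple length up to 6,
# so every method body is an explicit, unrolled sum of force_gpu calls.
for N in 1:6
    calls = [:(force_gpu(inters[$i], dr, coord_i, coord_j, atom_i, atom_j, boundary, special))
             for i in 1:N]
    @eval function addforces(inters::NTuple{$N, Any}, dr, coord_i, coord_j, atom_i, atom_j,
                             boundary, special)
        return +($(calls...))
    end
end
```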
What is … But the likely solution is something like: …

Important is that …
Thanks Valentin. Indeed the force case seems to work with permutations of the above and

```julia
f_tuple = ntuple(length(inters)) do inter_type_i
    force_gpu(inters[inter_type_i], dr, coord_i, coord_j, atom_i, atom_j, boundary, special)
end
f = sum(f_tuple)
```
Strangely, though, the potential energy case

```julia
pe_tuple = ntuple(length(inters)) do inter_type_i
    potential_energy(inters[inter_type_i], dr, coord_i, coord_j, atom_i, atom_j, boundary, special)
end
pe = sum(pe_tuple)
```

works for tuples of length 2 but fails on tuples of length 3 with a similar error. Any idea what is going on there?
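One more thing that might be worth trying, though I have not tested it, is making the tuple length a compile-time constant with Val so that ntuple is fully unrolled:

```julia
# Untested idea: Val(length(inters)) turns the length into a compile-time constant,
# which should let ntuple unroll into explicit calls during kernel compilation.
pe_tuple = ntuple(Val(length(inters))) do inter_type_i
    potential_energy(inters[inter_type_i], dr, coord_i, coord_j, atom_i, atom_j, boundary, special)
end
pe = sum(pe_tuple)
```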
Hi! I have implemented some custom pairwise interactions that use the `DistanceNeighborFinder`. The implementations work on the CPU. I then tried to use the interactions on the GPU. I followed the examples in the Molly documentation, converting the relevant arrays to `CuArray` and ensuring all the interaction types evaluate to `true` with the `isbitstype` function.
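For reference, the kind of check I mean is along these lines, with a hypothetical custom interaction type used only for illustration:

```julia
# Hypothetical custom pairwise interaction, just to illustrate the isbits check;
# an immutable struct with plain bits fields like this can be passed to a CUDA kernel.
struct MyPairwiseInter
    σ::Float64
    ϵ::Float64
end

isbitstype(MyPairwiseInter)  # true, so instances are safe to use in GPU kernel arguments
```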
However, I get an error that is related to the CUDA.jl package. The error only appears when I use three (or more) pairwise interactions together; the CUDA kernel compiles successfully if I use any pair of interactions (interaction1 + interaction2, interaction1 + interaction3, etc.). By looking at the stack trace, I traced the error back to lines 36-39 of the `src/cuda.jl` file.

I still need to familiarize myself more with the CUDA.jl package, but maybe the instructions generated when using tuples of pairwise interactions are not yet supported when compiling CUDA kernels. I tried different versions of the same lines of code, hoping to find an alternative representation that is compatible with CUDA.jl, and found one way that makes the error disappear.

With this change, the CI passes all the tests successfully. I have not yet checked the performance implications, but I can try if there are input files for benchmarking the GPU code. Finally, I can submit a PR if this change looks OK.

UPDATE: I found that using other combinations of pairwise interactions makes the error reappear. I will further investigate the cause...