Optimizations for how cross section data, tally meshes, and cells are stored. Right now with `np.ndarray`, the memory allocated for each item is that of the largest array, and every other item is padded to the same size. `numba.jitclass` is a remedy for this but is not GPU operable. Some initial ideas are:

- `numba.jitclass` with a puller function that converts to `np.ndarray`s for GPU runs
- An offset scheme where data is stored as a single one-dimensional array with offsets
- Use of other C-type data structures from libraries like PyTorch or CuPy
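The offset scheme can be sketched as follows. This is only an illustration of the idea, not code from the repository: ragged per-item arrays (e.g. cross section grids of different lengths) are concatenated into one flat 1-D array, and an offsets array records where each item's data begins, so storage grows with the total data size instead of `n_items * max_length`.

```python
import numpy as np

# Hypothetical per-item cross section grids of different lengths
xs_per_item = [np.array([1.0, 2.0, 3.0]),
               np.array([4.0, 5.0]),
               np.array([6.0, 7.0, 8.0, 9.0])]

# Flatten into one contiguous 1-D array
flat = np.concatenate(xs_per_item)

# offsets[i] marks where item i starts; offsets[i+1] is one past its end
offsets = np.zeros(len(xs_per_item) + 1, dtype=np.int64)
np.cumsum([len(a) for a in xs_per_item], out=offsets[1:])

def get_item(flat, offsets, i):
    """Return a view of item i's data from the flat storage."""
    return flat[offsets[i]:offsets[i + 1]]

print(get_item(flat, offsets, 1))  # → [4. 5.]
```

Because `flat` and `offsets` are plain contiguous NumPy arrays, this layout stays GPU-friendly while avoiding the padded-to-largest waste described above.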
@jpmorgan98 Per our discussion with Braxton yesterday, it sounds like it is easier for the GPU-Numba work if we stick with the NumPy structured array than if we move to jitclass, at least for now, perhaps until the first working GPU merge. But what do you think, @braxtoncuneo ?
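For context, a minimal sketch of the structured-array trade-off being discussed (field names here are hypothetical, not from the code): a NumPy structured array forces every record into the same fixed-size slot, so each array-valued field is padded to the largest needed length.

```python
import numpy as np

MAX_XS = 4  # size of the largest cross section grid (illustrative)

# Each record reserves MAX_XS floats regardless of how many it uses
cell_dtype = np.dtype([
    ("ID", np.int64),
    ("xs", np.float64, (MAX_XS,)),   # padded to the max length
    ("xs_len", np.int64),            # how much of the slot is actually used
])

cells = np.zeros(3, dtype=cell_dtype)
cells[0]["xs"][:3] = [1.0, 2.0, 3.0]
cells[0]["xs_len"] = 3
# The padding is the memory overhead the issue describes, but the flat,
# fixed layout is what makes the array straightforward to hand to
# Numba-compiled GPU kernels, unlike a jitclass.
```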