I followed the first part of the `sparse_gp_tutorial.ipynb` tutorial, which trains an offline sparse Gaussian process. Once I increased `training_size` and `validation_size`, a `MemoryError: std::bad_alloc` occurred during the step below.
I also tried my own data, which contains 9000 structures with 274 atoms each, and it failed in the same way.
```python
# Calculate descriptors of the validation and training structures.
print("Computing descriptors of validation points...")
validation_strucs = []
validation_forces = np.zeros((validation_size, noa, 3))
for n, snapshot in enumerate(validation_pts):
    pos = positions[snapshot]
    frcs = forces[snapshot]

    # Create structure object, which computes and stores descriptors.
    struc = Structure(cell, coded_species, pos, cutoff, descriptors)

    validation_strucs.append(struc)
    validation_forces[n] = frcs
print("Done.")

print("Computing descriptors of training points...")
training_strucs = []
training_forces = np.zeros((training_size, noa, 3))
for n, snapshot in enumerate(training_pts):
    pos = positions[snapshot]
    frcs = forces[snapshot]

    # Create structure object, which computes and stores descriptors.
    struc = Structure(cell, coded_species, pos, cutoff, descriptors)

    # Assign force labels to the training structure.
    struc.forces = frcs.reshape(-1)

    training_strucs.append(struc)
    training_forces[n] = frcs
print("Done.")
```
What I want to do is simply train an offline model on my own data (which is large: 9000 structures, 274 atoms each), not an on-the-fly model.
I would appreciate any pointers on how to work around this!
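For scale, here is a rough back-of-envelope of the descriptor storage involved. Every number other than the structure and atom counts is an assumption on my part; the real footprint depends on the descriptor settings used in the tutorial and on how each structure stores its neighbor gradients:

```python
# Back-of-envelope memory estimate -- all per-atom numbers are assumptions.
n_structures = 9000
n_atoms = 274
n_desc = 300              # assumed descriptor length per atom (depends on descriptor settings)
n_neigh = 50              # assumed average number of neighbors within the cutoff
bytes_per_double = 8

# Descriptor values: one vector per atom per structure.
desc_gb = n_structures * n_atoms * n_desc * bytes_per_double / 1e9

# Descriptor gradients w.r.t. neighbor positions (3 components per neighbor).
grad_gb = n_structures * n_atoms * n_desc * n_neigh * 3 * bytes_per_double / 1e9

print(f"descriptor values ~ {desc_gb:.0f} GB")   # ~6 GB with these assumptions
print(f"descriptor grads  ~ {grad_gb:.0f} GB")   # ~890 GB with these assumptions
```

If estimates like this are in the right ballpark, holding descriptors for all 9000 structures in memory at once seems likely to be the problem, which is why I am asking whether there is a supported way to train offline on a dataset of this size.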