But if we evaluate many pipelines on the same dataset, the chance of a failure before the evaluation finishes is high, and if it fails, all the results for that dataset are lost...
Would it be possible to save the results as soon as they are computed (i.e. after every fold of every session of every subject)?
Potential issues I see:
- parallel access to the HDF5 file
- computational overhead due to accessing the HDF5 file more often
What do you think?
I love this idea, it is fundamental, but I don't have much experience with how the HDF5 handling works or how to change it. Can you open a PR so we can discuss it with the code?
As I understand, right now we save the results to the HDF5 file once the evaluation of a whole dataset is over, and then we continue with the next dataset:
moabb/moabb/evaluations/base.py
Line 165 in a9f2e4c