-
Running out of RAM (14 TB) is massively unlikely, unless your dataset is hugely malformed and is causing some algorithm to go haywire. Are you sure about that 14 TB number? That would be a $200k+ machine. Perhaps you mean 14 GB?
There would be logs associated with this; check with your sysadmins what appeared in the kernel messages. "Out of memory" (OOM) will show up there.
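As a rough sketch of where to look (the exact commands and log locations depend on your distribution, and reading the kernel log may require root):

```shell
# OOM-killer lines in the kernel log look roughly like:
#   Out of memory: Killed process 12345 (python3) ...
# Search the kernel ring buffer; grep -i makes the match case-insensitive,
# and || true keeps the pipeline from failing when nothing matched
dmesg -T 2>/dev/null | grep -i "out of memory" || true

# On systemd-based systems, the journal also keeps kernel messages
# from earlier boots, which survive a reboot after the crash
journalctl -k 2>/dev/null | grep -i "out of memory" || true
```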
-
I believe RABIES processing is resumable via the functions provided by nipype.
-
Dear experts,
Our shared server crashed (more on that below) while I was preprocessing an extensive dataset (12 minutes of resting state for a large number of subjects, some with multiple measurements) using Docker. Since I am using the fast_commonspace option, would it be possible to restart processing from the last successful step and still get the outputs and diagnosis reports for my whole dataset in one place, instead of having to rerun everything?
Furthermore, I would be grateful for assistance with the parallelization options -p MultiProc and --scale_min_memory:
I have been using the following parameters:
The last entry in the log file is
after which it stops.
I have been told the virtual server crashed because the available RAM (1.4 TB) was fully used. As we are currently trying to locate/understand the issue: is it plausible/possible that memory was continually allocated for each new preprocessing step while memory from completed steps was never freed?
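A back-of-envelope check can make this concrete: with MultiProc, each concurrent worker holds its own working copies of the data, so peak usage scales roughly with the number of processes. The numbers below are purely illustrative (not measured from RABIES):

```python
def epi_gb(x, y, z, timepoints, bytes_per_voxel=4):
    """Memory for one 4D EPI series held as float32 (illustrative sizes)."""
    return x * y * z * timepoints * bytes_per_voxel / 1024**3

def peak_memory_gb(n_procs, gb_per_node):
    """Rough upper bound for MultiProc: peaks scale with concurrent nodes."""
    return n_procs * gb_per_node
```

For example, a hypothetical 100x100x60 volume with 600 timepoints is about 1.34 GB as float32; if each node holds several such copies during resampling and dozens of workers run at once, total usage grows linearly with the -p MultiProc worker count, which is where --scale_min_memory-style throttling becomes relevant.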
Do you have any tips or suggestions regarding these parameters?
Thank you very much!