-
Certain nipype nodes have a minimum memory allocation for running the node, which prevents running too many operations simultaneously when not enough memory is available. --scale_min_memory is a multiplier that increases or decreases this memory requirement (1.0 keeps the default, 0.5 halves it so operations are assumed to need less memory, and 2.0 doubles the requirement). So increasing the multiplier could mitigate memory errors, if that is the issue.
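For context, here is a minimal sketch of how a multiplier like this maps onto nipype's scheduler. The node name, the stand-in function, the 10 GB baseline, and the 32 GB/8-process limits are illustrative assumptions, not values from this thread; only `mem_gb`, the `MultiProc` plugin, and its `n_procs`/`memory_gb` arguments are actual nipype API.

```python
# Sketch (assumed values): how a --scale_min_memory-style multiplier could
# interact with nipype's MultiProc scheduler.
from nipype.pipeline import engine as pe
from nipype.interfaces.utility import Function

def heavy_step(x):
    # Stand-in for a memory-hungry preprocessing operation (hypothetical).
    return x

scale_min_memory = 2.0   # value passed on the command line (assumed)
base_mem_gb = 10.0       # hypothetical default minimum for this node

# mem_gb declares the node's estimated memory requirement to the scheduler;
# scaling it by 2.0 makes the node claim 20 GB instead of 10 GB.
node = pe.Node(
    Function(input_names=["x"], output_names=["out"], function=heavy_step),
    name="heavy_step",
    mem_gb=base_mem_gb * scale_min_memory,
)
node.inputs.x = 1

wf = pe.Workflow(name="preproc_sketch", base_dir=".")
wf.add_nodes([node])

# MultiProc only launches a node when the summed declared mem_gb of all
# running nodes stays under memory_gb, so a larger multiplier means fewer
# nodes run concurrently; it does not grant the node more physical RAM.
wf.run(plugin="MultiProc", plugin_args={"n_procs": 8, "memory_gb": 32})
```

Under this reading, `--scale_min_memory 2` does not make 20 GB available to the node; it makes the node advertise a 20 GB requirement so the scheduler leaves it more headroom and schedules fewer operations alongside it.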
-
Thanks for the response.
Are the errors due to the node's memory cap or the computer's RAM? In this scenario, does it mean the node has reached its requirement limit even though more memory is available on the system? Say the node's minimum memory requirement is 10 GB: if I use --scale_min_memory 2, does that mean 20 GB is available for processing instead of 10 GB? -g
-
Thanks, Gabe. Yes, I understand now. This helps a lot. Thanks again. -g
-
If I use --local_threads 8 and --scale_min_memory 2 for preprocessing, what does the number 2 indicate (the default value is 1)?
Is it placing some kind of cap on memory usage if usage exceeds a threshold limit while running 8 threads concurrently?
Thanks.
-g