Summary of Feature
Some of the multi-locale features struggle in heavily contended Slurm environments. I can't comment on all the launchers, but here are a few things I ran into:
1) The slurm-gasnet_ibv launcher tries to salloc to run the _real jobs ... This presumes that a newly enqueued job will run within an interactive timescale, which may not be true.
2) The gasnet_ibv and gasnet_ucx launchers will try to srun if the wrapper program is run from within an allocation, which is fine if you have --exclusive on your salloc, but may hang without insight if you are only using a non-exclusive portion of a node. (This bites particularly in interactive use, e.g. salloc srun --pty, since any requested resources get assigned to the pty and none remain to run Chapel's inner sruns; see the sketch after this list.) The --oversubscribe flag to salloc and the --overlap flag to srun (and the equivalent SLURM_ environment variables) can help here, but don't seem to be incorporated auto-magically.
3) gasnet_ibv and gasnet_ucx don't seem to consider the current Slurm job's memory when deciding on segment size. Rather, they seem to grab all the physical memory on the node, which will sometimes trip an OOM error in srun, but mostly causes a silent SIGKILL, even with GASNet tracing enabled.
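A minimal sketch of the hang in (2), assuming a single-node, non-exclusive interactive allocation; the resource numbers are placeholders and myProgram stands in for any Chapel wrapper binary:

    # Non-exclusive interactive allocation; the pty's task consumes the
    # allocated resources.
    salloc -N 1 -n 1 --mem=16G srun --pty bash

    # Inside that shell: the launcher's inner srun has no free resources to
    # land on, so this appears to hang with no output.
    ./myProgram -nL 1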
(1) is reasonably documented already. Basically, don't use the slurm-prefixed launchers if there's not a reasonable chance you can hop right onto your node(s), or use CHPL_LAUNCHER_USE_SBATCH to have it generate a batch script for you. (I haven't tried the latter, as I am mixing both Chapel and non-Chapel workloads in the same batch.)
(2) is a stumbling block for folks with less Slurm experience. Slurm's environment variables can get non-exclusive jobs working out of the box today; a sketch follows below. My preference would be for the Chapel wrapper to automatically apply the overlap flags, but I can see a case to be made that this is within the scope of the user's responsibility, in which case some documentation might save folks like me time.
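A sketch of the environment-variable route, assuming SLURM_OVERLAP (the input-environment equivalent of srun's --overlap, available in Slurm 20.11 and later) is inherited by the launcher's inner srun:

    # Inside the existing non-exclusive allocation: let job steps spawned from
    # this shell share resources with the pty step instead of waiting for
    # free ones.
    export SLURM_OVERLAP=1
    ./myProgram -nL 1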
(3) seems like a bug. The communication layer shouldn't be grabbing more memory than the Slurm job has available; I don't know offhand whether Chapel or GASNet should enforce that. There is already some reference to GASNET_PHYSMEM_MAX in the InfiniBand documentation, but it doesn't include the notion of having effective access to less than the whole node's RAM. IIRC, when I tried passing it (but not exporting it), it didn't pass from myProgram to myProgram_real, or otherwise didn't affect the SIGKILL. However, manually setting GASNET_MAX_SEGSIZE to some value within my Slurm job's allocation did get me running again.
Edit: on a fresh build, GASNET_PHYSMEM_MAX seems sufficient to prevent the SIGKILL.
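A sketch of both memory workarounds, assuming a job allocated with --mem=16G; the values are placeholders and the exact size syntax should be checked against the GASNet documentation for your build:

    # Cap GASNet's physical-memory estimate near the job's --mem limit rather
    # than the node's full RAM.
    export GASNET_PHYSMEM_MAX='14 GB'

    # Or, on builds where that isn't enough, pin the segment size directly to
    # something that fits inside the allocation.
    export GASNET_MAX_SEGSIZE='8 GB'

    ./myProgram -nL 1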
Trying to summarize some live debugging that happened over the last few weeks on Gitter. Please correct any misunderstandings or misinterpretations on my part!
Steps to reproduce:
2) Try to launch a multi-locale program (even with -nL 1) within a non-exclusive Slurm interactive job, without oversubscribe or overlap flags. It doesn't seem to matter whether you use ssh, pmi, or mpi as the spawner.
3) Try to run a multi-locale program (even with -nL 1) within a non-exclusive Slurm job where the --mem Slurm flag is some fraction of the node's physical memory. Set the GASNET_VERBOSEENV=1 environment variable and look at the value of GASNET_MAX_SEGSIZE (see the sketch after these steps).
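A reproduction sketch for step 3, with placeholder sizes; GASNET_VERBOSEENV=1 makes GASNet echo its effective settings at startup:

    # The node might have, say, 256 GB of RAM, but the job is only granted 16 GB.
    salloc -N 1 -n 1 --mem=16G srun --pty bash

    # Check whether the reported GASNET_MAX_SEGSIZE reflects the 16 GB job
    # limit or the node's full physical memory.
    GASNET_VERBOSEENV=1 ./myProgram -nL 1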
Allocating an interactive Slurm job with salloc <args> --overcommit srun --overlap --pty bash seems sufficient to avoid having to do anything explicit with Chapel w.r.t. (2) (i.e. no additional Slurm environment variables).
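For concreteness, a sketch of that allocation with placeholder resources standing in for <args>:

    # --overcommit on salloc plus --overlap on the pty's srun leave room for
    # the launcher's inner sruns.
    salloc -N 1 -n 1 --mem=16G --overcommit srun --overlap --pty bash
    ./myProgram -nL 1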