[Feature Request]: Better interaction / documentation with non-exclusive Slurm jobs #26788

Open
psath opened this issue Feb 26, 2025 · 2 comments

Comments

@psath

psath commented Feb 26, 2025

Summary of Feature

Some of the multi-locale features struggle in heavily contended Slurm environments. I can't comment on all the launchers, but here are a few things I ran into:

  1. The slurm-gasnet_ibv launcher tries to salloc to run the _real binary. This presumes that a newly enqueued job will start within an interactive timescale, which may not be true.
  2. The gasnet_ibv and gasnet_ucx launchers will try to srun if the wrapper program is run from within an allocation, which is fine if you have --exclusive on your salloc, but may hang without any diagnostic if you only hold a non-exclusive portion of a node. (Particularly when running interactively, like salloc srun --pty, since any requested resources get assigned to the pty step and none remain for Chapel's inner sruns.) The --oversubscribe flag to salloc and the --overlap flag to srun (and the equivalent SLURM_ environment variables) can help here, but they don't seem to be applied automatically.
  3. gasnet_ibv and gasnet_ucx don't seem to consider the current Slurm job's memory when deciding on a segment size. Rather, they seem to grab all the physical memory on the node, which sometimes trips an OOM in srun but mostly causes a silent SIGKILL, even with GASNet tracing enabled.

(1) is reasonably documented already. Basically, don't use a slurm-prefixed launcher unless there's a reasonable chance you can hop right onto your node(s), or use CHPL_LAUNCHER_USE_SBATCH to have it generate a batch script for you. (I haven't tried the latter, as I am mixing Chapel and non-Chapel workloads in the same batch.)
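
For anyone who does want the sbatch route, I believe it looks roughly like the following (untested by me; the locale count is just an example):

```sh
# Untested sketch: have the Chapel launcher generate and submit a batch script
# instead of salloc'ing an interactive allocation.
export CHPL_LAUNCHER_USE_SBATCH=1
./myProgram -nl 2    # returns after submission; I believe output is redirected to a file
```
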
(2) is a stumbling block for folks with less Slurm experience. Slurm's environment variables can get non-exclusive jobs working out of the box today. My preference would be for the Chapel wrapper to apply the overlap flags automatically, but I can see a case that this falls within the user's responsibility, in which case some documentation might save folks like me some time.
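
For concreteness, I believe the env-var route looks roughly like this from inside an existing non-exclusive allocation (SLURM_OVERLAP per the srun man page; exact names may differ across Slurm versions, and the locale count is just an example):

```sh
# Sketch: let Chapel's inner srun overlap the resources already held by my
# interactive pty step, instead of waiting for resources that never free up.
export SLURM_OVERLAP=1    # env-var form of srun --overlap
./myProgram -nl 1
```
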
(3) seems like a bug. The communication layer shouldn't grab more memory than the Slurm job has available; I don't know offhand whether Chapel or GASNet should enforce that. There is already some reference to GASNET_PHYSMEM_MAX in the InfiniBand documentation, but it doesn't cover the case of having effective access to less than the whole node's RAM. IIRC, when I tried passing it (but not exporting it), it didn't propagate from myProgram to myProgram_real, or else didn't prevent the SIGKILL. However, manually setting GASNET_MAX_SEGSIZE to a value within my Slurm job's allocation did get me running again.
Edit: on a fresh build, GASNET_PHYSMEM_MAX seems sufficient to prevent the SIGKILL.
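
For reference, the settings that got me past the SIGKILL looked roughly like this (the sizes here are placeholders; pick values comfortably inside your job's --mem limit):

```sh
# Sketch: tell GASNet how much memory this Slurm job actually has, rather than
# letting it size its segment off the node's full physical RAM.
export GASNET_PHYSMEM_MAX=14GB      # enough on a fresh build (see the edit above)
# export GASNET_MAX_SEGSIZE=12GB    # earlier workaround: cap the segment directly
./myProgram -nl 1
```
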

This is an attempt to summarize some live debugging that happened over the last few weeks on Gitter. Please correct any misunderstandings or misinterpretations on my part!

Steps to reproduce:
For (2): try to launch a multi-locale program (even with -nl 1) within a non-exclusive interactive Slurm job, without oversubscribe or overlap flags. It doesn't seem to matter whether you use ssh, pmi, or mpi as the spawner.
For (3): try to run a multi-locale program (even with -nl 1) within a non-exclusive Slurm job where the --mem Slurm flag is some fraction of the node's physical memory. Set GASNET_VERBOSEENV=1 and look at the reported value of GASNET_MAX_SEGSIZE. (A sketch of both follows below.)
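
A rough sketch of both reproductions (resource numbers are placeholders; for (3) I apply the overlap workaround from (2) so the inner srun actually starts, and your site may also want oversubscribe/overcommit on the salloc):

```sh
# Reproducing (2): a non-exclusive interactive job, no oversubscribe/overlap flags.
# --mem is deliberately well below the node's physical RAM.
salloc --nodes=1 --ntasks=1 --mem=16G srun --pty bash
./myProgram -nl 1    # inner srun hangs: the pty step holds all requested resources

# Reproducing (3): same shell, but let the inner srun overlap and have GASNet
# print the settings it decided on.
export SLURM_OVERLAP=1
export GASNET_VERBOSEENV=1
./myProgram -nl 1 2>&1 | grep -i SEGSIZE    # GASNET_MAX_SEGSIZE tracks node RAM, not the 16G --mem
```
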

@psath
Author

psath commented Feb 26, 2025

Allocating an interactive Slurm job with salloc <args> --overcommit srun --overlap --pty bash seems sufficient to avoid doing anything explicit on the Chapel side w.r.t. (2) (i.e. no additional Slurm environment variables needed).
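
Spelled out with placeholder resource arguments (adjust for your site), that recipe is:

```sh
# Overcommit the allocation and let the pty step overlap, so Chapel's inner
# srun still has resources to land on.
salloc --nodes=2 --ntasks=2 --time=01:00:00 --overcommit srun --overlap --pty bash

# Then, inside that shell, launch normally with no extra Slurm env vars:
./myProgram -nl 2
```
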

@lydia-duncan
Member

Thanks for filing, Paul! For those not following the Gitter, this issue grew out of some conversation there.
