find out mem per node on RML #99
Thank you @andrew-niaid for helpfully answering here.
A single GPU node (there are 8 of these nodes): NodeName=ai-rmlgpu08 Arch=x86_64 CoresPerSocket=8
For completeness, these nodes are available to you (and EM):

ai-rmlcpu01-16: 16 cores, 256 GB memory, no GPU

On BigSky, if you don't specify a partition in Slurm, you get the default partition, called 'int' (for interactive). Nodes can be available in multiple partitions. The default 'int' partition contains nodes ai-rmlcpu01-28.

We've started putting GPUs in all new nodes, which is why there are some in nodes with an 'ai-rmlcpu##' name. In Slurm, the most correct way to see if a node has a GPU is to look at the 'Gres=' line in the 'scontrol show node' output, like you have above. Gres is 'Generic RESource', a generic allocatable resource.
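To answer the original question (memory per node), a small sketch of the standard Slurm commands, assuming you are on a BigSky login node; the node names (ai-rmlcpu01, ai-rmlgpu08) are the examples mentioned above:

```shell
# List every node with its configured memory (MB) and generic resources (GPUs):
#   %N = node name, %m = memory in MB, %G = Gres (e.g. gpu:... or (null))
sinfo -N -o "%N %m %G"

# Full detail for a single node; RealMemory= is memory in MB and
# Gres= shows any GPUs, as described in the comment above:
scontrol show node ai-rmlgpu08 | grep -E "RealMemory|Gres"
```

`sinfo -N` gives one line per node, which makes it easy to eyeball which partitions' nodes have GPUs without running `scontrol` on each node individually.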
For Brad's work -