Remove unused check_parallel_system function #81

Merged
merged 1 commit into from
Jun 7, 2023
1 change: 0 additions & 1 deletion docs/developers_guide/api.md
@@ -297,7 +297,6 @@ ocean/api
:toctree: generated/

get_available_parallel_resources
check_parallel_system
set_cores_per_node
run_command
get_parallel_command
29 changes: 0 additions & 29 deletions polaris/parallel.py
@@ -69,35 +69,6 @@ def get_available_parallel_resources(config):
return available_resources


def check_parallel_system(config):
"""
Check whether we are in an appropriate state for the given queuing system.
For systems with Slurm, this means that we need to have an interactive
or batch job on a compute node, as determined by the ``$SLURM_JOB_ID``
environment variable.

Parameters
----------
config : polaris.config.PolarisConfigParser
Configuration options

Raises
-------
ValueError
If using Slurm and not on a compute node
"""

parallel_system = config.get('parallel', 'system')
if parallel_system == 'slurm':
if 'SLURM_JOB_ID' not in os.environ:
raise ValueError('SLURM_JOB_ID not defined. You are likely not '
'on a compute node.')
elif parallel_system == 'single_node':
pass
else:
raise ValueError(f'Unexpected parallel system: {parallel_system}')


def set_cores_per_node(config, cores_per_node):
"""
If the system has Slurm, find out the ``cpus_per_node`` and set the config
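For reference, the deleted check amounted to a simple environment-variable test: Slurm sets `SLURM_JOB_ID` inside both interactive and batch allocations, so its absence suggests the code is running on a login node. A minimal standalone sketch of the same logic (`on_slurm_compute_node` is a hypothetical helper name, not part of polaris):

```python
import os


def on_slurm_compute_node() -> bool:
    """Return True if running inside a Slurm job allocation.

    Slurm exports ``SLURM_JOB_ID`` for both interactive (``salloc``/
    ``srun``) and batch (``sbatch``) jobs, so checking for it is a
    cheap proxy for "am I on a compute node?".
    """
    return 'SLURM_JOB_ID' in os.environ


def require_slurm_compute_node() -> None:
    """Raise ValueError if not inside a Slurm allocation."""
    if not on_slurm_compute_node():
        raise ValueError(
            'SLURM_JOB_ID not defined. You are likely not on a '
            'compute node.'
        )
```

Callers that still need this guard after the removal can inline an equivalent check rather than depending on `polaris.parallel`.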