diff --git a/README.md b/README.md
index 209c480f..20f41028 100644
--- a/README.md
+++ b/README.md
@@ -87,7 +87,7 @@ with SlurmClusterExecutor() as exe:
 ```
 In this case the [Python simple queuing system adapter (pysqa)](https://pysqa.readthedocs.io) is used to submit the
 `calc()` function to the [SLURM](https://slurm.schedmd.com) job scheduler and request an allocation with two CPU cores
-for the execution of the function - [HPC Cluster Executor](https://executorlib.readthedocs.io/en/latest/2-hpc-submission.html). In the background the [sbatch](https://slurm.schedmd.com/sbatch.html)
+for the execution of the function - [HPC Cluster Executor](https://executorlib.readthedocs.io/en/latest/2-hpc-cluster.html). In the background the [sbatch](https://slurm.schedmd.com/sbatch.html)
 command is used to request the allocation to execute the Python function.
 
 Within a given [SLURM](https://slurm.schedmd.com) job executorlib can also be used to assign a subset of the
diff --git a/docs/installation.md b/docs/installation.md
index 76d76689..ae6e8fdf 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -68,8 +68,8 @@ documentation covers the [installation of pysqa](https://pysqa.readthedocs.io/en
 detail.
 
 ## HPC Job Executor
-For optimal performance in [HPC Allocation Mode](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html) the
-[flux framework](https://flux-framework.org) is recommended as job scheduler. Even when the [Simple Linux Utility for Resource Management (SLURM)](https://slurm.schedmd.com)
+For optimal performance the [HPC Job Executor](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html) leverages the
+[flux framework](https://flux-framework.org) as its recommended job scheduler, even when the [Simple Linux Utility for Resource Management (SLURM)](https://slurm.schedmd.com)
 or any other job scheduler is already installed on the HPC cluster. [flux framework](https://flux-framework.org) can be
 installed as a secondary job scheduler to leverage [flux framework](https://flux-framework.org) for the distribution of
 resources within a given allocation of the primary scheduler.
diff --git a/docs/trouble_shooting.md b/docs/trouble_shooting.md
index 0e406d27..61179ac4 100644
--- a/docs/trouble_shooting.md
+++ b/docs/trouble_shooting.md
@@ -20,9 +20,10 @@ dependency. The installation of this and other optional dependencies is covered
 
 ## Missing Dependencies
 The default installation of executorlib only comes with a limited number of dependencies, especially the [zero message queue](https://zeromq.org)
-and [cloudpickle](https://github.com/cloudpipe/cloudpickle). Additional features like [caching](https://executorlib.readthedocs.io/en/latest/installation.html#caching), [HPC submission mode](https://executorlib.readthedocs.io/en/latest/installation.html#hpc-cluster-executor)
-and [HPC allocation mode](https://executorlib.readthedocs.io/en/latest/installation.html#hpc-job-executor) require additional dependencies. The dependencies are explained in more detail in the
-[installation section](https://executorlib.readthedocs.io/en/latest/installation.html#).
+and [cloudpickle](https://github.com/cloudpipe/cloudpickle). Additional features like [caching](https://executorlib.readthedocs.io/en/latest/installation.html#caching), the [HPC Cluster Executors](https://executorlib.readthedocs.io/en/latest/installation.html#hpc-cluster-executor)
+and the [HPC Job Executors](https://executorlib.readthedocs.io/en/latest/installation.html#hpc-job-executor) require
+additional dependencies. The dependencies are explained in more detail in the
+[installation section](https://executorlib.readthedocs.io/en/latest/installation.html).
 
 ## Python Version
 Executorlib supports all current Python version ranging from 3.9 to 3.13. Still some of the dependencies and especially
diff --git a/notebooks/3-hpc-job.ipynb b/notebooks/3-hpc-job.ipynb
index 4995aebc..486405d8 100644
--- a/notebooks/3-hpc-job.ipynb
+++ b/notebooks/3-hpc-job.ipynb
@@ -124,7 +124,7 @@
    "metadata": {},
    "source": [
     "### Block Allocation\n",
-    "The block allocation for the HPC allocation mode follows the same implementation as the [block allocation for the local mode](https://executorlib.readthedocs.io/en/latest/1-local.html#block-allocation). It starts by defining the initialization function `init_function()` which returns a dictionary which is internally used to look up input parameters for Python functions submitted to the `FluxJobExecutor` class. Commonly this functionality is used to store large data objects inside the Python process created for the block allocation, rather than reloading these Python objects for each submitted function."
+    "The block allocation for the HPC allocation mode follows the same implementation as the [block allocation for the Single Node Executor](https://executorlib.readthedocs.io/en/latest/1-single-node.html#block-allocation). It starts by defining the initialization function `init_function()` which returns a dictionary which is internally used to look up input parameters for Python functions submitted to the `FluxJobExecutor` class. Commonly this functionality is used to store large data objects inside the Python process created for the block allocation, rather than reloading these Python objects for each submitted function."
    ]
   },
   {
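
As a companion to the README hunk above, a minimal sketch of the two-core submission it describes could look like the following. This assumes the `SlurmClusterExecutor` from executorlib accepts a per-call `resource_dict` with a `cores` entry and that its default constructor arguments are enough to reach the SLURM queue; the `calc()` body is only a placeholder.

```python
from executorlib import SlurmClusterExecutor


def calc(i):
    # Placeholder workload; the real calc() is defined in the README.
    return i**2


# submit() hands calc() to pysqa, which requests a two-core allocation
# via sbatch and executes the function inside that allocation.
with SlurmClusterExecutor() as exe:
    future = exe.submit(calc, 3, resource_dict={"cores": 2})
    print(future.result())
```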
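Similarly, the block-allocation pattern referenced in the notebook cell above can be sketched as follows; this assumes `FluxJobExecutor` accepts the `max_workers`, `init_function`, and `block_allocation` arguments described in the executorlib documentation, and the names `parameter_a`/`parameter_b` are purely illustrative.

```python
from executorlib import FluxJobExecutor


def init_function():
    # Returns a dictionary; its keys are matched against the parameter
    # names of submitted functions and injected as their inputs.
    return {"parameter_a": 1, "parameter_b": 2}


def calc(i, parameter_a, parameter_b):
    # parameter_a and parameter_b are looked up from init_function()
    return i + parameter_a + parameter_b


with FluxJobExecutor(
    max_workers=2,
    init_function=init_function,
    block_allocation=True,
) as exe:
    future = exe.submit(calc, 3)
    print(future.result())  # 3 + 1 + 2 = 6
```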