diff --git a/docs/trouble_shooting.md b/docs/trouble_shooting.md
index ca868bcd..ec4e080f 100644
--- a/docs/trouble_shooting.md
+++ b/docs/trouble_shooting.md
@@ -31,9 +31,9 @@ performance computing installations Python 3.12 is the recommended Python version
 
 ## Resource Dictionary
 The resource dictionary parameter `resource_dict` can contain one or more of the following options:
-* `cores_per_worker` (int): number of MPI cores to be used for each function call
+* `cores` (int): number of MPI cores to be used for each function call
 * `threads_per_core` (int): number of OpenMP threads to be used for each function call
-* `gpus_per_worker` (int): number of GPUs per worker - defaults to 0
+* `gpus_per_core` (int): number of GPUs per core - defaults to 0
 * `cwd` (str/None): current working directory where the parallel python task is executed
 * `openmpi_oversubscribe` (bool): adds the `--oversubscribe` command line flag (OpenMPI and SLURM only) - default False
 * `slurm_cmd_args` (list): additional command line arguments for the srun call (SLURM only)
@@ -54,4 +54,4 @@ high performance computing (HPC) clusters via SSH, this functionality is not sup
 is the use of [cloudpickle](https://github.com/cloudpipe/cloudpickle) for serialization inside executorlib; this requires
 the same Python version and dependencies on both computers connected via SSH. As tracking those parameters is rather
 complicated, the SSH connection functionality of [pysqa](https://pysqa.readthedocs.io) is not officially supported in
-executorlib.
\ No newline at end of file
+executorlib.
diff --git a/executorlib/__init__.py b/executorlib/__init__.py
index 5e9814db..70904fe6 100644
--- a/executorlib/__init__.py
+++ b/executorlib/__init__.py
@@ -35,9 +35,9 @@ class Executor:
         cache_directory (str, optional): The directory to store cache files. Defaults to "cache".
         max_cores (int): defines the number of cores which can be used in parallel
         resource_dict (dict): A dictionary of resources required by the task.
                               With the following keys:
-              - cores (int): number of MPI cores to be used for each function call
+              - cores (int): number of MPI cores to be used for each function call
               - threads_per_core (int): number of OpenMP threads to be used for each function call
-              - gpus_per_worker (int): number of GPUs per worker - defaults to 0
+              - gpus_per_core (int): number of GPUs per core - defaults to 0
               - cwd (str/None): current working directory where the parallel python task is executed
               - openmpi_oversubscribe (bool): adds the `--oversubscribe` command line flag (OpenMPI and SLURM only) - default False
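Since this patch renames two `resource_dict` keys (`cores_per_worker` becomes `cores`, `gpus_per_worker` becomes `gpus_per_core`), downstream callers holding dictionaries with the old names need updating. The following is a minimal sketch of a stdlib-only migration helper; `migrate_resource_dict` and `RENAMED_KEYS` are hypothetical names introduced here for illustration and are not part of executorlib:

```python
# Hypothetical helper (not part of executorlib): map the pre-rename
# resource_dict keys to the names introduced in this diff.
RENAMED_KEYS = {
    "cores_per_worker": "cores",
    "gpus_per_worker": "gpus_per_core",
}


def migrate_resource_dict(resource_dict: dict) -> dict:
    """Return a copy of resource_dict with deprecated keys renamed."""
    return {RENAMED_KEYS.get(key, key): value for key, value in resource_dict.items()}


old_style = {"cores_per_worker": 2, "threads_per_core": 1, "gpus_per_worker": 0}
print(migrate_resource_dict(old_style))
# {'cores': 2, 'threads_per_core': 1, 'gpus_per_core': 0}
```

Keys that were not renamed (`threads_per_core`, `cwd`, `openmpi_oversubscribe`, `slurm_cmd_args`) pass through unchanged.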