Merge pull request #1531 from pyiron/dependabot/pip/pympipool-0.9.1
Bump pympipool from 0.8.5 to 0.9.1
jan-janssen authored Jul 15, 2024
2 parents ffcf53e + c840407 commit 7041119
Showing 10 changed files with 15 additions and 15 deletions.
2 changes: 1 addition & 1 deletion .ci_support/environment-docs.yml
@@ -19,7 +19,7 @@ dependencies:
 - psutil =6.0.0
 - pyfileindex =0.0.25
 - pyiron_snippets =0.1.2
-- pympipool =0.8.5
+- executorlib =0.0.1
 - pysqa =0.1.21
 - pytables =3.9.2
 - sqlalchemy =2.0.31
2 changes: 1 addition & 1 deletion .ci_support/environment-old.yml
@@ -13,7 +13,7 @@ dependencies:
 - psutil =5.8.0
 - pyfileindex =0.0.16
 - pyiron_snippets =0.1.1
-- pympipool =0.8.0
+- executorlib =0.0.1
 - pysqa =0.1.12
 - pytables =3.6.1
 - sqlalchemy =2.0.22
2 changes: 1 addition & 1 deletion .ci_support/environment.yml
@@ -17,7 +17,7 @@ dependencies:
 - psutil =6.0.0
 - pyfileindex =0.0.25
 - pyiron_snippets =0.1.2
-- pympipool =0.8.5
+- executorlib =0.0.1
 - pysqa =0.1.21
 - pytables =3.9.2
 - sqlalchemy =2.0.31
2 changes: 1 addition & 1 deletion README.md
@@ -18,7 +18,7 @@ simulation in `pyiron_atomistics`.
   can be wrapped in `pyiron_base` to enable parameter studies with thousands or millions of calculation.
 * The calculation can either be executed locally on the same computer or on high performance computing (HPC) resources.
   The python simple queuing system adapter [pysqa](https://pysqa.readthedocs.io) is used to interface with the HPC
-  queuing systems directly from python and the [pympipool](https://pympipool.readthedocs.io) package is employed to
+  queuing systems directly from python and the [pympipool](https://executorlib.readthedocs.io) package is employed to
   assign dedicated resources like multiple CPU cores and GPUs to individual python functions.
 * Scientific data is efficiently stored using the [hierarchical data format (HDF)](https://www.hdfgroup.org) via the
   [h5py](https://www.h5py.org) python library and more specifically the [h5io](https://github.com/h5io) packages to
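As context for the link swap above: executorlib (the renamed pympipool) exposes a `concurrent.futures`-style executor. A minimal sketch of the resource-assignment pattern the README describes, assuming the `executorlib.Executor` class name and `max_cores` keyword that the `generic.py` diff below relies on; the worker function is illustrative:

```python
# Minimal sketch: run a python function on dedicated CPU cores through the
# concurrent.futures-compatible executorlib Executor. The Executor/max_cores
# API is assumed from the import strings used elsewhere in this commit.
from executorlib import Executor

def multiply(a, b):
    # stand-in for an expensive simulation step
    return a * b

if __name__ == "__main__":
    with Executor(max_cores=2) as exe:
        future = exe.submit(multiply, 6, 7)
        print(future.result())  # 42, once the worker finishes
```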
2 changes: 1 addition & 1 deletion binder/environment.yml
@@ -16,7 +16,7 @@ dependencies:
 - psutil =6.0.0
 - pyfileindex =0.0.25
 - pyiron_snippets =0.1.2
-- pympipool =0.8.5
+- executorlib =0.0.1
 - pysqa =0.1.21
 - pytables =3.9.2
 - sqlalchemy =2.0.31
2 changes: 1 addition & 1 deletion docs/tutorial.md
@@ -131,7 +131,7 @@ selected. Still it is important to mention, that assigning 120 CPU cores does no
 function. Only by implementing internal parallelization inside the python functions with solutions like
 [concurrent.futures.ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor)
 it is possible to parallelize the execution of python functions on a single compute node. Finally, the pyiron developers
-released the [pympipool](https://pympipool.readthedocs.io) to enable parallelization of python functions as well as the
+released the [pympipool](https://executorlib.readthedocs.io) to enable parallelization of python functions as well as the
 direct assignment of GPU resources inside a given queuing system allocation over multiple compute nodes using the
 hierarchical queuing system [flux](https://flux-framework.org).
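The single-node parallelization this tutorial paragraph refers to needs nothing beyond the standard library; a sketch (the worker function is illustrative):

```python
# Sketch: parallelize a python function over local CPU cores with the
# standard-library ProcessPoolExecutor mentioned in the tutorial text.
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == "__main__":  # guard required for process-based executors
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(square, range(8)))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```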

4 changes: 2 additions & 2 deletions pyiron_base/jobs/datamining.py
@@ -255,7 +255,7 @@ def create_table(self, file, job_status_list, executor=None, enforce_update=Fals
         The executor, if given, must not naively pickle the mapped functions or
         arguments, as PyironTable relies on lambda functions internally. Use
         with executors that rely on dill or cloudpickle instead. Pyiron
-        provides such executors in the `pympipool` sub packages.
+        provides such executors in the `executorlib` sub packages.

         Args:
             file (FileHDFio): HDF were the previous state of the table is stored
@@ -785,7 +785,7 @@ def update_table(self, job_status_list=None):
         self.project.db.item_update({"timestart": datetime.now()}, self.job_id)
         with self.project_hdf5.open("input") as hdf5_input:
             if self._executor_type is None and self.server.cores > 1:
-                self._executor_type = "pympipool.Executor"
+                self._executor_type = "executorlib.Executor"
             if self._executor_type is not None:
                 with self._get_executor(max_workers=self.server.cores) as exe:
                     self._pyiron_table.create_table(
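The docstring's pickling caveat is easy to demonstrate in isolation: the standard `pickle` module rejects lambdas like the ones PyironTable uses internally, while `cloudpickle` (already a dependency, see the pyproject.toml diff below) serializes them by value. A standalone sketch, independent of pyiron:

```python
import pickle
import cloudpickle

add_one = lambda x: x + 1

try:
    pickle.dumps(add_one)  # stdlib pickle serializes functions by reference
except pickle.PicklingError as err:
    print(f"pickle failed: {err}")

# cloudpickle serializes the function body itself, so it round-trips
restored = pickle.loads(cloudpickle.dumps(add_one))
print(restored(41))  # 42
```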
6 changes: 3 additions & 3 deletions pyiron_base/jobs/job/generic.py
@@ -1620,15 +1620,15 @@ def _get_executor(self, max_workers=None):
                 "No executor type defined - Please set self.executor_type."
             )
         elif (
-            self._executor_type == "pympipool.Executor"
+            self._executor_type == "executorlib.Executor"
             and platform.system() == "Darwin"
         ):
             # The Mac firewall might prevent connections based on the network address - especially Github CI
             return import_class(self._executor_type)(
                 max_cores=max_workers, hostname_localhost=True
             )
-        elif self._executor_type == "pympipool.Executor":
-            # The pympipool Executor defines max_cores rather than max_workers
+        elif self._executor_type == "executorlib.Executor":
+            # The executorlib Executor defines max_cores rather than max_workers
             return import_class(self._executor_type)(max_cores=max_workers)
         elif isinstance(self._executor_type, str):
             return import_class(self._executor_type)(max_workers=max_workers)
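The dispatch above boils down to: resolve the executor class from its dotted path, then translate `max_workers` into `max_cores` for the executorlib backend. A self-contained sketch of that idea, with a simplified stand-in for pyiron's `import_class` helper:

```python
# Self-contained sketch of the dispatch in _get_executor(): the executor
# class is resolved from its dotted path, and the executorlib backend takes
# max_cores where concurrent.futures-style executors take max_workers.
import importlib

def import_class(path):
    # simplified stand-in for pyiron's import_class helper
    module_name, class_name = path.rsplit(".", 1)
    return getattr(importlib.import_module(module_name), class_name)

def get_executor(executor_type, max_workers=None):
    if executor_type == "executorlib.Executor":
        # executorlib counts cores rather than worker processes
        return import_class(executor_type)(max_cores=max_workers)
    return import_class(executor_type)(max_workers=max_workers)

if __name__ == "__main__":
    with get_executor("concurrent.futures.ProcessPoolExecutor", 2) as exe:
        print(exe.submit(pow, 2, 10).result())  # 1024
```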
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,5 +1,5 @@
 [build-system]
-requires = ["cloudpickle", "gitpython", "h5io", "h5py", "jinja2", "numpy", "pandas", "pint", "psutil", "pyfileindex", "pysqa", "sqlalchemy", "tables", "tqdm", "traitlets", "setuptools", "versioneer[toml]==0.29"]
+requires = ["cloudpickle", "executorlib", "gitpython", "h5io", "h5py", "jinja2", "numpy", "pandas", "pint", "psutil", "pyfileindex", "pysqa", "sqlalchemy", "tables", "tqdm", "traitlets", "setuptools", "versioneer[toml]==0.29"]
 build-backend = "setuptools.build_meta"

 [project]
@@ -25,6 +25,7 @@ classifiers = [
 ]
 dependencies = [
     "cloudpickle==3.0.0",
+    "executorlib==0.0.1",
     "gitpython==3.1.43",
     "h5io_browser==0.0.15",
     "h5py==3.11.0",
@@ -36,7 +37,6 @@ dependencies = [
     "psutil==6.0.0",
     "pyfileindex==0.0.25",
     "pyiron_snippets==0.1.2",
-    "pympipool==0.8.5",
     "pysqa==0.1.21",
     "sqlalchemy==2.0.31",
     "tables==3.9.2",
4 changes: 2 additions & 2 deletions tests/unit/table/test_datamining.py
@@ -10,7 +10,7 @@


 try:
-    import pympipool
+    import executorlib

     skip_parallel_test = False
 except ImportError:
@@ -73,7 +73,7 @@ def test_numpy_reload(self):

 @unittest.skipIf(
     skip_parallel_test,
-    "pympipool is not installed, so the pympipool based tests are skipped.",
+    "executorlib is not installed, so the executorlib based tests are skipped.",
 )
 class TestProjectDataParallel(TestWithProject):
     @classmethod
