Pydantic2 and MBE procedure #432

Status: Draft. Wants to merge 16 commits into base: master.
18 changes: 11 additions & 7 deletions .github/workflows/CI.yml
@@ -103,10 +103,11 @@ jobs:
qcore --accept-license

# note: psi4 on c-f pins to a single qcel and qcng, so this may be handy for solve-and-replace
#- name: Special Config - QCElemental Dep
- name: Special Config - QCElemental Dep
# if: (matrix.cfg.label == 'ADCC')
# run: |
# conda remove qcelemental --force
run: |
conda remove qcelemental --force
python -m pip install git+https://github.com/loriab/QCElemental.git@v0.29.0.dev1 --no-deps
# python -m pip install qcelemental>=0.26.0 --no-deps

# note: conda remove --force, not mamba remove --force b/c https://github.com/mamba-org/mamba/issues/412
@@ -122,14 +123,17 @@ jobs:
run: |
sed -i s/from\ pydantic\ /from\ pydantic.v1\ /g ${CONDA_PREFIX}/lib/python${{ matrix.cfg.python-version }}/site-packages/psi4/driver/*py

- name: Install QCEngine
run: |
python -m pip install . --no-deps

- name: Environment Information
run: |
mamba info
mamba list

- name: Install QCEngine
run: |
python -m pip install . --no-deps
python -c "import qcelemental as q;print(q.__file__, q.__version__)"
python -c "import qcengine as q;print(q.__file__, q.__version__)"
git describe

- name: QCEngineRecords
run: |
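The sed step in this workflow rewrites `from pydantic ` to `from pydantic.v1 ` inside psi4's driver files, so psi4's pydantic-1-era code keeps working against a pydantic 2 installation. A minimal sketch of what the rewritten imports look like (the model name is illustrative, not psi4's):

```python
# After the sed rewrite, psi4's v1-style code imports from the v1
# compatibility namespace that ships inside pydantic>=2.
from pydantic.v1 import BaseModel

class DriverOptions(BaseModel):  # hypothetical model for illustration
    method: str
    basis: str = "sto-3g"

opts = DriverOptions(method="hf")
print(opts.dict())  # v1-style .dict() serialization still works under the shim
```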
3 changes: 2 additions & 1 deletion devtools/conda-envs/adcc.yaml
@@ -13,7 +13,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.24.0
- pydantic=1
- pydantic>=2.1
- pydantic-settings
- msgpack-python

# Testing
3 changes: 2 additions & 1 deletion devtools/conda-envs/base.yaml
@@ -8,7 +8,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.12.0
- pydantic>=1.0.0
- pydantic>=2.1
- pydantic-settings

# Testing
- pytest
3 changes: 2 additions & 1 deletion devtools/conda-envs/docs-cf.yaml
@@ -5,7 +5,8 @@ channels:
dependencies:
- python
- networkx
- pydantic=1
- pydantic>=2.1
- pydantic-settings
- numpy
- pint

3 changes: 2 additions & 1 deletion devtools/conda-envs/mrchem.yaml
@@ -13,7 +13,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental>=0.24
- pydantic
- pydantic>=2.1
- pydantic-settings

# Testing
- pytest
3 changes: 2 additions & 1 deletion devtools/conda-envs/nwchem.yaml
@@ -8,7 +8,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.24.0
- pydantic>=1.0.0
- pydantic>=2.1
- pydantic-settings
- networkx>=2.4.0

# Testing
3 changes: 2 additions & 1 deletion devtools/conda-envs/openmm.yaml
@@ -16,7 +16,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.11.1
- pydantic >=1.8.2
- pydantic>=2.1
- pydantic-settings
- pint <0.22

# Testing
3 changes: 2 additions & 1 deletion devtools/conda-envs/opt-disp.yaml
@@ -25,7 +25,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.26.0
- pydantic>=1.0.0
- pydantic>=2.1
- pydantic-settings
- msgpack-python

# Testing
3 changes: 2 additions & 1 deletion devtools/conda-envs/psi-nightly.yaml
@@ -12,7 +12,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.26.0
- pydantic>=1.0.0
- pydantic>=2.1
- pydantic-settings
- msgpack-python

# Testing
3 changes: 2 additions & 1 deletion devtools/conda-envs/psi.yaml
@@ -16,7 +16,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental=0.24.0
- pydantic=1.8.2 # test minimun stated version.
- pydantic>=2.1 # test minimum stated version.
- pydantic-settings
- msgpack-python

# Testing
3 changes: 2 additions & 1 deletion devtools/conda-envs/qcore.yaml
@@ -11,7 +11,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.24
- pydantic >=1.8.2
- pydantic>=2.1
- pydantic-settings
- tbb<2021

# Testing
3 changes: 2 additions & 1 deletion devtools/conda-envs/rdkit.yaml
@@ -11,7 +11,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.12.0
- pydantic>=1.0.0
- pydantic>=2.1
- pydantic-settings

# Testing
- pytest
3 changes: 2 additions & 1 deletion devtools/conda-envs/torchani.yaml
@@ -11,7 +11,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.12.0
- pydantic>=1.0.0
- pydantic>=2.1
- pydantic-settings

- pytorch

3 changes: 2 additions & 1 deletion devtools/conda-envs/xtb.yaml
@@ -11,7 +11,8 @@ dependencies:
- py-cpuinfo
- psutil
- qcelemental >=0.11.1
- pydantic >=1.8.2
- pydantic>=2.1
- pydantic-settings

# Extras
- gcp-correction
7 changes: 2 additions & 5 deletions qcengine/compute.py
@@ -13,10 +13,7 @@
from .util import compute_wrapper, environ_context, handle_output_metadata, model_wrapper

if TYPE_CHECKING:
try:
from pydantic.v1.main import BaseModel
except ImportError:
from pydantic.main import BaseModel
from pydantic.main import BaseModel
from qcelemental.models import AtomicResult


@@ -28,7 +25,7 @@ def _process_failure_and_return(model, return_dict, raise_error):
if raise_error:
raise InputError(model.error.error_message)
elif return_dict:
return model.dict()
return model.model_dump()
else:
return model
else:
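The `.dict()` → `.model_dump()` change in `compute.py` is the standard pydantic v2 serialization rename. A minimal sketch (the model name is illustrative, not qcengine's actual result class):

```python
from pydantic import BaseModel

class Outcome(BaseModel):  # illustrative stand-in for a result model
    success: bool
    error_message: str = ""

res = Outcome(success=False, error_message="boom")
# pydantic v2: model_dump() replaces the v1-era .dict() method
print(res.model_dump())  # {'success': False, 'error_message': 'boom'}
```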
44 changes: 20 additions & 24 deletions qcengine/config.py
@@ -9,10 +9,8 @@
import socket
from typing import Any, Dict, Optional, Union

try:
import pydantic.v1 as pydantic
except ImportError:
import pydantic
from pydantic import BaseModel, ConfigDict, Field
from pydantic_settings import BaseSettings, SettingsConfigDict

from .extras import get_information

@@ -64,7 +62,7 @@ def get_global(key: Optional[str] = None) -> Union[str, Dict[str, Any]]:
return _global_values[key]


class NodeDescriptor(pydantic.BaseModel):
class NodeDescriptor(BaseModel):
"""
Description of an individual node
"""
@@ -78,7 +76,7 @@ class NodeDescriptor(pydantic.BaseModel):
memory_safety_factor: int = 10 # Percentage of memory as a safety factor

# Specifications
ncores: Optional[int] = pydantic.Field(
ncores: Optional[int] = Field(
None,
description="""Number of cores accessible to each task on this node

@@ -88,9 +86,9 @@ class NodeDescriptor(pydantic.BaseModel):
retries: int = 0

# Cluster options
is_batch_node: bool = pydantic.Field(
is_batch_node: bool = Field(
False,
help="""Whether the node running QCEngine is a batch node
description="""Whether the node running QCEngine is a batch node

Some clusters are configured such that tasks are launched from a special "batch" or "MOM" onto the compute nodes.
The compute nodes on such clusters often have a different CPU architecture than the batch nodes and
@@ -103,7 +101,7 @@ class NodeDescriptor(pydantic.BaseModel):
``mpiexec_command`` must always be used even for serial jobs (e.g., getting the version number)
""",
)
mpiexec_command: Optional[str] = pydantic.Field(
mpiexec_command: Optional[str] = Field(
None,
description="""Invocation for launching node-parallel tasks with MPI

@@ -140,31 +138,29 @@ def __init__(self, **data: Dict[str, Any]):
if "{ranks_per_node}" not in self.mpiexec_command:
raise ValueError("mpiexec_command must explicitly state the number of ranks per node")

class Config:
extra = "forbid"
model_config = ConfigDict(
extra="forbid",
)


class TaskConfig(pydantic.BaseSettings):
class TaskConfig(BaseSettings):
"""Description of the configuration used to launch a task."""

# Specifications
ncores: int = pydantic.Field(None, description="Number cores per task on each node")
nnodes: int = pydantic.Field(None, description="Number of nodes per task")
memory: float = pydantic.Field(
None, description="Amount of memory in GiB (2^30 bytes; not GB = 10^9 bytes) per node."
)
ncores: int = Field(None, description="Number cores per task on each node")
nnodes: int = Field(None, description="Number of nodes per task")
memory: float = Field(None, description="Amount of memory in GiB (2^30 bytes; not GB = 10^9 bytes) per node.")
scratch_directory: Optional[str] # What location to use as scratch
retries: int # Number of retries on random failures
mpiexec_command: Optional[str] # Command used to launch MPI tasks, see NodeDescriptor
use_mpiexec: bool = False # Whether it is necessary to use MPI to run an executable
cores_per_rank: int = pydantic.Field(1, description="Number of cores per MPI rank")
scratch_messy: bool = pydantic.Field(
False, description="Leave scratch directory and contents on disk after completion."
)
cores_per_rank: int = Field(1, description="Number of cores per MPI rank")
scratch_messy: bool = Field(False, description="Leave scratch directory and contents on disk after completion.")

class Config(pydantic.BaseSettings.Config):
extra = "forbid"
env_prefix = "QCENGINE_"
model_config = SettingsConfigDict(
extra="forbid",
env_prefix="QCENGINE_",
)


def _load_defaults() -> None:
2 changes: 2 additions & 0 deletions qcengine/procedures/base.py
@@ -10,6 +10,7 @@
from .nwchem_opt import NWChemDriverProcedure
from .optking import OptKingProcedure
from .torsiondrive import TorsionDriveProcedure
from .manybody import ManyBodyProcedure
from .model import ProcedureHarness

__all__ = ["register_procedure", "get_procedure", "list_all_procedures", "list_available_procedures"]
@@ -71,3 +72,4 @@ def list_available_procedures() -> Set[str]:
register_procedure(BernyProcedure())
register_procedure(NWChemDriverProcedure())
register_procedure(TorsionDriveProcedure())
register_procedure(ManyBodyProcedure())
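The new `register_procedure(ManyBodyProcedure())` call follows the harness-registry pattern this module exposes (`register_procedure` / `get_procedure`). A dependency-free sketch of that pattern, with simplified stand-in classes rather than qcengine's real ones:

```python
from typing import Dict

class ProcedureHarness:  # simplified stand-in for qcengine's harness base class
    name: str = ""

class ManyBodyProcedure(ProcedureHarness):
    name = "manybody"

_procedures: Dict[str, ProcedureHarness] = {}

def register_procedure(entry: ProcedureHarness) -> None:
    # keyed case-insensitively so lookups don't depend on capitalization
    _procedures[entry.name.lower()] = entry

def get_procedure(name: str) -> ProcedureHarness:
    return _procedures[name.lower()]

register_procedure(ManyBodyProcedure())
print(get_procedure("ManyBody").name)  # manybody
```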
12 changes: 6 additions & 6 deletions qcengine/procedures/berny.py
@@ -3,7 +3,7 @@
import sys
import traceback
from io import StringIO
from typing import Any, Dict, Union
from typing import Any, ClassVar, Dict, Union

import numpy as np
from qcelemental.models import OptimizationInput, OptimizationResult, FailedOperation
@@ -16,7


class BernyProcedure(ProcedureHarness):
_defaults = {"name": "Berny", "procedure": "optimization"}
_defaults: ClassVar[Dict[str, Any]] = {"name": "Berny", "procedure": "optimization"}

def found(self, raise_error: bool = False) -> bool:
return which_import(
@@ -54,11 +54,11 @@ def compute(
log.addHandler(logging.StreamHandler(log_stream))
log.setLevel("INFO")

input_data = input_data.dict()
input_data = input_data.model_dump()
geom_qcng = input_data["initial_molecule"]
comput = {**input_data["input_specification"], "molecule": geom_qcng}
program = input_data["keywords"].pop("program")
task_config = config.dict()
task_config = config.model_dump()
trajectory = []
output_data = input_data.copy()
try:
Expand All @@ -70,14 +70,14 @@ def compute(
geom_qcng["geometry"] = np.stack(geom_berny.coords * berny.angstrom)
ret = qcengine.compute(comput, program, task_config=task_config)
if ret.success:
trajectory.append(ret.dict())
trajectory.append(ret.model_dump())
opt.send((ret.properties.return_energy, ret.return_result))
else:
# qcengine.compute returned FailedOperation
raise UnknownError("Gradient computation failed")

except UnknownError:
error = ret.error.dict() # ComputeError
error = ret.error.model_dump() # ComputeError
except Exception:
error = {"error_type": "unknown", "error_message": f"Berny error:\n{traceback.format_exc()}"}
else:
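The `ClassVar[Dict[str, Any]]` annotation added to `_defaults` matters because pydantic v2 treats bare underscore-prefixed attributes as private model attributes; `ClassVar` marks them as plain class-level data that stays outside pydantic's field machinery. A minimal illustration with a hypothetical model:

```python
from typing import Any, ClassVar, Dict

from pydantic import BaseModel

class Harness(BaseModel):
    # ClassVar keeps this out of pydantic's field/private-attribute handling,
    # so it remains ordinary shared class data accessible on the class itself
    _defaults: ClassVar[Dict[str, Any]] = {"name": "Berny", "procedure": "optimization"}

print(Harness._defaults["name"])  # Berny
```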
11 changes: 4 additions & 7 deletions qcengine/procedures/geometric.py
@@ -1,4 +1,4 @@
from typing import Any, Dict, Union
from typing import Any, ClassVar, Dict, Union

from qcelemental.models import OptimizationInput, OptimizationResult
from qcelemental.util import safe_version, which_import
@@ -8,13 +8,10 @@

class GeometricProcedure(ProcedureHarness):

_defaults = {"name": "geomeTRIC", "procedure": "optimization"}
_defaults: ClassVar[Dict[str, Any]] = {"name": "geomeTRIC", "procedure": "optimization"}

version_cache: Dict[str, str] = {}

class Config(ProcedureHarness.Config):
pass

def found(self, raise_error: bool = False) -> bool:
return which_import(
"geometric",
@@ -43,13 +40,13 @@ def compute(self, input_model: "OptimizationInput", config: "TaskConfig") -> "Op
except ModuleNotFoundError:
raise ModuleNotFoundError("Could not find geomeTRIC in the Python path.")

input_data = input_model.dict()
input_data = input_model.model_dump()

# Temporary patch for geomeTRIC
input_data["initial_molecule"]["symbols"] = list(input_data["initial_molecule"]["symbols"])

# Set retries to two if zero while respecting local_config
local_config = config.dict()
local_config = config.model_dump()
local_config["retries"] = local_config.get("retries", 2) or 2
input_data["input_specification"]["extras"]["_qcengine_local_config"] = local_config

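Dropping the empty `class Config(ProcedureHarness.Config): pass` from `GeometricProcedure` is safe because pydantic v2 merges `model_config` down the inheritance chain automatically, so a subclass with no extra settings needs no config at all. A small sketch with illustrative names:

```python
from pydantic import BaseModel, ConfigDict

class Base(BaseModel):
    model_config = ConfigDict(extra="forbid")

class Child(Base):  # no empty Config subclass needed; settings are inherited
    maxiter: int = 50  # illustrative field

print(Child.model_config["extra"])  # forbid
```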