Doc: Update the developer guide for the v3 #3376

Merged (15 commits) on Mar 2, 2024
119 changes: 117 additions & 2 deletions doc/development/create-a-model.md

## Design a new component

::::{tab-set}

:::{tab-item} TensorFlow {{ tensorflow_icon }}
When creating a new component, take the descriptor as an example: you should inherit from the {py:class}`deepmd.tf.descriptor.descriptor.Descriptor` class and override several methods. Abstract methods such as {py:class}`deepmd.tf.descriptor.descriptor.Descriptor.build` must be implemented, while others are optional. You should keep the arguments of these methods unchanged.

After implementation, you need to register the component with a key:
```py
from deepmd.tf.descriptor.descriptor import (
    Descriptor,
)


@Descriptor.register("some_descrpt")
class SomeDescript(Descriptor):
    def __init__(self, arg1: bool, arg2: float) -> None:
        pass
```
:::

:::{tab-item} PyTorch {{ pytorch_icon }}
The PyTorch backend follows a structured inheritance pattern. When creating a new descriptor, inherit from both the {py:class}`deepmd.pt.model.descriptor.base_descriptor.BaseDescriptor` class and the {py:class}`torch.nn.Module` class. Abstract methods, including {py:class}`deepmd.pt.model.descriptor.base_descriptor.BaseDescriptor.fwd`, must be implemented, while others remain optional. Keep the original method arguments unchanged. Once the implementation is complete, register the component with a designated key:

```py
from typing import (
    Optional,
)

import torch

from deepmd.pt.model.descriptor.base_descriptor import (
    BaseDescriptor,
)


@BaseDescriptor.register("some_descrpt")
class SomeDescript(BaseDescriptor, torch.nn.Module):
def __init__(self, arg1: bool, arg2: float) -> None:
pass

def forward(
self,
coord_ext: torch.Tensor,
atype_ext: torch.Tensor,
nlist: torch.Tensor,
mapping: Optional[torch.Tensor] = None,
):
pass

def serialize(self) -> dict:
pass

@classmethod
def deserialize(cls, data: dict) -> "SomeDescript":
pass
```

The `serialize` and `deserialize` methods are important for cross-backend model conversion.
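
The exact serialized schema is defined by each component; the sketch below is illustrative only (the dictionary keys are assumptions, and the other required methods are omitted). It fleshes out the `serialize` and `deserialize` stubs of `SomeDescript`, assuming the constructor stores `arg1` and `arg2` as attributes:

```py
# Sketch only: fleshes out the serialize/deserialize stubs of SomeDescript,
# assuming __init__ stores its arguments as attributes. Other required
# methods are omitted for brevity.
class SomeDescript(BaseDescriptor, torch.nn.Module):
    def __init__(self, arg1: bool, arg2: float) -> None:
        torch.nn.Module.__init__(self)
        self.arg1 = arg1
        self.arg2 = arg2

    def serialize(self) -> dict:
        # Everything needed to rebuild the object goes into a plain dict.
        return {"type": "some_descrpt", "arg1": self.arg1, "arg2": self.arg2}

    @classmethod
    def deserialize(cls, data: dict) -> "SomeDescript":
        data = data.copy()
        data.pop("type", None)  # drop the registry key before calling __init__
        return cls(**data)
```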


In most cases, there is no need to create a new fitting net. For fitting user-defined scalar properties, the {py:class}`deepmd.pt.model.task.ener.InvarFitting` class can be used. However, if a new fitting net is required, you should inherit from both the {py:class}`deepmd.pt.model.task.base_fitting.BaseFitting` class and the {py:class}`torch.nn.Module` class. Alternatively, for a more straightforward approach, you can inherit from the {py:class}`deepmd.pt.model.task.fitting.GeneralFitting` class.


```py
from typing import (
    Optional,
)

import torch

from deepmd.dpmodel import (
    FittingOutputDef,
    fitting_check_output,
)
from deepmd.pt.model.task.fitting import (
    GeneralFitting,
)


@GeneralFitting.register("some_fitting")
@fitting_check_output
class SomeFittingNet(GeneralFitting):
def __init__(self, arg1: bool, arg2: float) -> None:
pass

def forward(
self,
descriptor: torch.Tensor,
atype: torch.Tensor,
gr: Optional[torch.Tensor] = None,
g2: Optional[torch.Tensor] = None,
h2: Optional[torch.Tensor] = None,
fparam: Optional[torch.Tensor] = None,
aparam: Optional[torch.Tensor] = None,
):
pass

def output_def(self) -> FittingOutputDef:
pass
```
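
The `output_def` method declares what the fitting net produces. The sketch below is illustrative only: it assumes `OutputVariableDef` is importable from `deepmd.dpmodel` and that the keyword arguments shown exist with these names, so verify them against the sources of your deepmd-kit version:

```py
from deepmd.dpmodel import (
    FittingOutputDef,
    OutputVariableDef,
)


# Sketch of an output_def implementation for SomeFittingNet above.
# The keyword arguments of OutputVariableDef are assumptions; check them
# against your version of deepmd-kit before relying on this.
def output_def(self) -> FittingOutputDef:
    # Declare a single per-atom scalar named "energy" that can be reduced
    # over atoms and differentiated w.r.t. coordinates and the cell.
    return FittingOutputDef(
        [
            OutputVariableDef(
                "energy",
                shape=[1],
                reducible=True,
                r_differentiable=True,
                c_differentiable=True,
            ),
        ]
    )
```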

The model architecture within the PyTorch backend is structured with multiple layers of abstraction to provide a high degree of flexibility. Generally, the process begins with an atomic model responsible for handling atom-wise property calculations. This atomic model should inherit from the {py:class}`deepmd.pt.model.atomic_model.base_atomic_model.BaseAtomicModel` class and the {py:class}`torch.nn.Module` class.

Subsequently, the atomic model is wrapped using `make_model(AtomicModel)`, where `make_model` is the `deepmd.pt.model.model.make_model.make_model` function. The purpose of the `make_model` wrapper is to handle the translation between the original system and the extended system.

Finally, the entire model is wrapped within a `DPModel`, which must inherit from the {py:class}`deepmd.pt.model.model.model.BaseModel` class and include the aforementioned `make_model(AtomicModel)`. The user directly interacts with a wrapper built on top of the `DPModel` to seamlessly handle result translations.

```py
import torch

from deepmd.pt.model.atomic_model.base_atomic_model import (
    BaseAtomicModel,
)
from deepmd.pt.model.model.make_model import (
make_model,
)
from deepmd.pt.model.model.model import (
BaseModel,
)


class SomeAtomicModel(BaseAtomicModel, torch.nn.Module):
def __init__(self, arg1: bool, arg2: float) -> None:
pass

def forward_atomic(self):
pass


@BaseModel.register("some_model")
class SomeDPModel(make_model(SomeAtomicModel), BaseModel):
pass


class SomeModel(SomeDPModel):
pass
```

:::
:::{tab-item} DPModel {{ dpmodel_icon }}

The DPModel backend is implemented in pure NumPy and serves as a reference backend for maintaining consistency in tests. Its design pattern closely mirrors that of the PyTorch backend, so developers can refer to the PyTorch development guide above for detailed instructions.
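
For orientation, a registration sketch in the same spirit as the PyTorch example is shown below. The import path and the `call` method name are assumptions that mirror the PyTorch backend, so check them against the actual `deepmd.dpmodel` sources:

```py
# Sketch only: the import path and method names are assumed to mirror the
# PyTorch backend and may differ in your version of deepmd-kit.
from deepmd.dpmodel.descriptor.base_descriptor import (
    BaseDescriptor,
)


@BaseDescriptor.register("some_descrpt")
class SomeDescript(BaseDescriptor):
    def __init__(self, arg1: bool, arg2: float) -> None:
        self.arg1 = arg1
        self.arg2 = arg2

    def call(self, coord_ext, atype_ext, nlist, mapping=None):
        # NumPy arrays in, NumPy arrays out; no autograd framework is involved.
        pass
```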

:::
::::

## Register new arguments

To let others use your new component in their input files, you need to create a new method that returns the `Argument` definitions of your new component, and then register the new arguments. For example, the code below
```py
from typing import List

from dargs import Argument
from deepmd.utils.argcheck import descrpt_args_plugin


@descrpt_args_plugin.register("some_descrpt")
Expand Down
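
As a usage illustration (a hypothetical input fragment, not taken from the original guide), the registered key and arguments then become selectable from the descriptor section of the input script:

```py
# Hypothetical "descriptor" section of an input script, assuming the component
# was registered under the key "some_descrpt" with arguments arg1 and arg2.
descriptor_config = {
    "type": "some_descrpt",
    "arg1": True,
    "arg2": 2.0,
}
```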