Releases: deepmodeling/deepmd-kit
v3.0.0a0
DeePMD-kit v3: A multiple-backend framework for deep potentials
We are excited to announce the first alpha version of DeePMD-kit v3. DeePMD-kit v3 allows you to train and run deep potential models on top of TensorFlow or PyTorch. DeePMD-kit v3 also supports the DPA-2 model, a novel architecture for large atomic models.
Highlights
Multiple-backend framework
DeePMD-kit v3 adds a pluggable multiple-backend framework to provide consistent training and inference experiences between different backends. You can:
- Use the same training data and input script to train a deep potential model with different backends. Switch backends based on efficiency, functionality, or convenience:
# Training a model using the TensorFlow backend
dp --tf train input.json
dp --tf freeze
# Training a model using the PyTorch backend
dp --pt train input.json
dp --pt freeze
- Use any model to perform inference via any existing interface, including `dp test`, the Python/C++/C interfaces, and third-party packages (dpdata, ASE, LAMMPS, AMBER, GROMACS, i-PI, CP2K, OpenMM, ABACUS, etc.). For example, with LAMMPS:
# run LAMMPS with a TensorFlow backend model
pair_style deepmd frozen_model.pb
# run LAMMPS with a PyTorch backend model
pair_style deepmd frozen_model.pth
# Calculate model deviation using both models
pair_style deepmd frozen_model.pb frozen_model.pth out_file md.out out_freq 100
- Convert models between backends with `dp convert-backend`, if both backends support the model:
dp convert-backend frozen_model.pb frozen_model.pth
dp convert-backend frozen_model.pth frozen_model.pb
- Add a new backend to DeePMD-kit with much less effort if you want to contribute to DeePMD-kit.
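The extension-based model loading described above can be sketched as a small dispatch table in plain Python. This is an illustrative sketch only: the mapping and function name are hypothetical, and DeePMD-kit's actual backend registry is a pluggable framework rather than a static dict.

```python
from pathlib import Path

# Hypothetical mapping from model-file extension to backend name;
# DeePMD-kit's real plugin framework discovers backends dynamically.
BACKEND_BY_EXTENSION = {
    ".pb": "tensorflow",   # TensorFlow frozen graphs
    ".pth": "pytorch",     # PyTorch models
    ".dp": "dp",           # backend-independent reference format
}

def detect_backend(model_file: str) -> str:
    """Pick a backend from the model filename extension."""
    ext = Path(model_file).suffix
    try:
        return BACKEND_BY_EXTENSION[ext]
    except KeyError:
        raise ValueError(f"unsupported model extension: {ext!r}")
```

Requiring a distinct extension per backend (see the breaking changes below) is what makes this kind of dispatch unambiguous for every interface, from `dp test` to LAMMPS.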
PyTorch backend: a backend designed for large atomic models and new research
We added the PyTorch backend in DeePMD-kit v3 to support the development of new models, especially for large atomic models.
DPA-2 model: Towards a universal large atomic model for molecular and material simulation
The DPA-2 model is a novel architecture for a Large Atomic Model (LAM) that can accurately represent a diverse range of chemical systems and materials, enabling high-quality simulations and predictions with significantly less effort than traditional methods. The DPA-2 model is currently implemented only in the PyTorch backend. An example configuration is in the examples/water/dpa2 directory.
The DPA-2 descriptor includes two primary components: `repinit` and `repformer`. The detailed architecture is shown in the following figure.
Training strategies for large atomic models
The PyTorch backend supports multiple training strategies for developing large atomic models.
Parallel training: Large atomic models have many hyper-parameters and a complex architecture, so training on multiple GPUs is often necessary. Benefiting from the PyTorch community ecosystem, parallel training for the PyTorch backend can be driven by `torchrun`, a launcher for distributed data parallel training.
torchrun --nproc_per_node=4 --no-python dp --pt train input.json
Multi-task training: Large atomic models are trained against data with a wide scope and at different DFT levels, which requires multi-task training. The PyTorch backend supports multi-task training, sharing the descriptor between different tasks. An example is given in examples/water_multi_task/pytorch_example/input_torch.json.
Fine-tuning: Fine-tuning is useful for training a pre-trained large model on a smaller, task-specific dataset. The PyTorch backend supports the `--finetune` argument in the `dp --pt train` command line.
Developing new models using Python and dynamic graphs
Researchers may find the static graph and the custom C++ OPs of the TensorFlow backend painful, as they sacrifice research convenience for computational performance. The PyTorch backend has a well-designed code structure built on dynamic graphs, currently written 100% in Python, which makes extending and debugging new deep potential models easier than with a static graph.
Supporting traditional deep potential models
People may still want to use, in the PyTorch backend, the traditional models already supported by the TensorFlow backend, and to compare the same model across backends. We have rewritten almost all of the traditional models in the PyTorch backend, as listed below:
- Features supported:
  - Descriptor: `se_e2_a`, `se_e2_r`, `se_atten`, `hybrid`
  - Fitting: energy, dipole, polar, fparam/aparam support
  - Model: `standard`, DPRc
  - Python inference interface
  - C++ inference interface for energy only
  - TensorBoard
- Features not supported yet:
  - Descriptor: `se_e3`, `se_atten_v2`, `se_e2_a_mask`
  - Fitting: `dos`
  - Model: `linear_ener`, DPLR, `pairtab`, `frozen`, `pairwise_dprc`, ZBL, Spin
  - Model compression
  - Python inference interface for DPLR
  - C++ inference interface for tensors and DPLR
  - Parallel training using Horovod
- Features not planned:
  - Descriptor: `loc_frame`, `se_e2_a` + type embedding, `se_a_ebd_v2`
  - NVNMD
Warning
As part of an alpha release, the PyTorch backend's API or user input arguments may change before the first stable version.
DP backend and format: reference backend for other backends
DP is a reference backend for development that uses pure NumPy to implement models, without any heavy deep-learning framework. It cannot be used for training, only for Python inference. As a reference backend, it aims not at the best performance but at correct results. The DP backend uses HDF5 to store serialized model data, which is backend-independent.
The DP backend and the serialization data are used in unit tests to ensure that different backends produce consistent results and can be converted into each other.
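A minimal sketch of such a cross-backend consistency check, assuming a toy linear "model" and JSON in place of the real HDF5 container; every name here is hypothetical and only illustrates the idea of one backend-independent parameter blob evaluated by two independent implementations.

```python
import json
import math

def serialize(params):
    # Backend-independent serialization; the real DP format uses HDF5.
    return json.dumps({"scale": params["scale"], "shift": params["shift"]})

def deserialize(blob):
    return json.loads(blob)

def backend_a_eval(params, x):
    # One implementation of the toy model y = scale * x + shift.
    return params["scale"] * x + params["shift"]

def backend_b_eval(params, x):
    # A second, independently written implementation of the same model.
    return math.fsum([params["scale"] * x, params["shift"]])

# A unit test would serialize once, load into both backends,
# and assert their outputs agree.
blob = serialize({"scale": 2.0, "shift": 1.0})
params = deserialize(blob)
```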
In the current version, the DP backend has a supporting status similar to the PyTorch backend, except that DPA-1 and DPA-2 are not supported yet.
Authors
The above highlights were mainly contributed by
- Hangrui Bi (@20171130), in #3180
- Chun Cai (@caic99), in #3180
- Junhan Chang (@TablewareBox), in #3180
- Yiming Du (@nahso), in #3180
- Guolin Ke (@guolinke), in #3180
- Xinzijian Liu (@zjgemi), in #3180
- Anyang Peng (@anyangml), in #3362, #3192, #3212, #3210, #3248, #3266, #3281, #3296, #3309, #3314, #3321, #3327, #3338, #3351, #3376, #3385
- Xuejian Qin (@qin2xue3jian4), in #3180
- Han Wang (@wanghan-iapcm), in #3188, #3190, #3208, #3184, #3199, #3202, #3219, #3225, #3232, #3235, #3234, #3241, #3240, #3246, #3260, #3274, #3268, #3279, #3280, #3282, #3295, #3289, #3340, #3352, #3357, #3389, #3391, #3400
- Jinzhe Zeng (@njzjz), in #3171, #3173, #3174, #3179, #3193, #3200, #3204, #3205, #3333, #3360, #3364, #3365, #3169, #3164, #3175, #3176, #3187, #3186, #3191, #3195, #3194, #3196, #3198, #3201, #3207, #3226, #3222, #3220, #3229, #3226, #3239, #3228, #3244, #3243, #3213, #3249, #3250, #3254, #3247, #3253, #3271, #3263, #3258, #3276, #3285, #3286, #3292, #3294, #3293, #3303, #3304, #3308, #3307, #3306, #3316, #3315, #3318, #3323, #3325, #3332, #3331, #3330, #3339, #3335, #3346, #3349, #3350, #3310, #3356, #3361, #3342, #3348, #3358, #3366, #3374, #3370, #3373, #3377, #3382, #3383, #3384, #3386, #3390, #3395, #3394, #3396, #3397
- Chengqian Zhang (@Chengqian-Zhang), in #3180
- Duo Zhang (@iProzd), in #3180, #3203, #3245, #3261, #3262, #3355, #3367, #3359, #3371, #3387, #3388, #3380, #3378
- Xiangyu Zhang (@CaRoLZhangxy), in #3162, #3287, #3337, #3375, #3379
Breaking changes
- Python 3.7 support is dropped. by @njzjz in #3185
- We require all model files to have the correct filename extension for all interfaces so that the corresponding backend can load them. TensorFlow model files must end with the `.pb` extension.
- Python class `DeepTensor` (including `DeepDipole` and `DeepPolar`) now returns the atomic tensor in the dimension of `natoms` instead of `nsel_atoms`. by @njzjz in #3390
- For developers: the Python module structure is fully refactored. The old `deepmd` module was moved to `deepmd.tf` without other API changes, and `deepmd_utils` was moved to `deepmd` without other API changes. by @njzjz in #3177, #3178
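The `DeepTensor` change above, returning atomic tensors over `natoms` rather than `nsel_atoms`, amounts to scattering per-selected-atom values back into a full-length array. A hedged sketch (the helper name is hypothetical, not the deepmd API):

```python
def expand_to_all_atoms(atom_types, selected_type, selected_values):
    """Scatter per-selected-atom values into a full-length (natoms) list,
    filling non-selected atoms with zeros."""
    it = iter(selected_values)
    return [next(it) if t == selected_type else 0.0 for t in atom_types]
```

Callers that previously indexed results by selected-atom position now index directly by atom index, with zeros at atoms the tensor does not apply to.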
Other changes
Enhancement
- Neighbor stat for the TensorFlow backend is 80x accelerated. by @njzjz in #3275
- i-PI: remove normalize_coord by @njzjz in #3257
- LAMMPS: fix_dplr.cpp delete redundant setup and set atom->image when pre_force by @shiruosong in #3344, #3345
- Bump scikit-build-core to 0.8 by @njzjz in #3369
- Bump LAMMPS to stable_2Aug2023_update3 by @njzjz in #3399
- Add fparam/aparam support for fine-tune by @njzjz in #3313
- TF: remove freeze warning for optional nodes by @njzjz in #3381
CI/CD
- Build macos-arm64 wheel on M1 runners by @njzjz in #3206
- Other improvements and fixes to GitHub Actions by @njzjz in #3238, #3283, #3284, #3288, #3290, #3326
- Enable docstring code format by @njzjz in #3267
Bugfix
- Fix TF 2.16 compatibility by @njzjz in #3343
- Detect version in advance before building deepmd-kit-cu11 by @njzjz in #3172
- C API: change the required shape of electric field to nloc * 3 by @njzjz in #3237
New Contributors
- @anyangml made their first contribution in #3192
- @shiruosong made their first contribution in #3344
Full Changelog: https://github.com/deepmodeling/de...
v2.2.9
v2.2.8
What's Changed
Breaking Changes
New Features
- build neighbor list with external Python program by @njzjz in #3046
- nvnmd: init-model feature and 256 neighbors by @LiuGroupHNU in #3058
- Add pairwise tabulation as an independent model by @njzjz in #3101
Enhancement
- support compressing gelu_tf by @njzjz in #2957
- respect user defined CUDAARCHS by @njzjz in #2979
- lmp: refactor ixnode by @njzjz in #2971
- print system prob using scientific notation by @njzjz in #3008
- remove unused codes in se_a.py by @nahso in #3049
- print NaN loss when labeled data is not found by @njzjz in #3047
Documentation
- docs: add theory from v2 paper by @njzjz in #2715
- docs: configuring automatically generated release notes by @njzjz in #2975
- docs: use relative links by @njzjz in #2976
- docs: remove lammps.md by @njzjz in #2986
- docs: document horovod on Conda-Forge by @njzjz in #3001
- docs: document external neighbor list by @njzjz in #3056
- docs: update documentation for pre-compiled C library by @njzjz in #3083
- docs: update Amber interface by @njzjz in #3074
- docs: document CP2K interface by @njzjz in #3158
Build and release
- bump scikit-build-core to 0.6 by @njzjz in #2981
- bump CUDA version to 12.2 for pre-built packages by @njzjz in #2960
- add cu11 prebuilt packages by @njzjz in #3002
- bump scikit-build-core to 0.7 by @njzjz in #3038
- bump LAMMPS to stable_2Aug2023_update2 by @njzjz in #3066
Bug fixes
- fix SpecifierSet behavior with prereleases by @njzjz in #2959
- fix restarting from compressed training with type embedding by @njzjz in #2996
- Add the missing initializations for extra embedding variables by @nahso in #3005
- Fix macro issue with multiple arguments by @njzjz in #3016
- fix se_a_ebd_v2 when nloc != nall by @njzjz in #3037
- fix: invalid read and write when natom grows by @Cloudac7 in #3031
- fix GPU mapping error for Horovod + finetune by @njzjz in #3048
- lmp: Register styles when using CMake by @njzjz in #3097
- fix segfault in ~Region by @njzjz in #3108
- lmp: fix evflag initialization by @njzjz in #3133
- cmake: fix setting `CMAKE_HIP_FLAGS` by @njzjz in #3155
- Fix max nbor size related issues by @denghuilu in #3157
- Fix possible memory leak in constructors by @njzjz in #3062
- fix memory leaks related to `char*` by @njzjz in #3063
- Update the path to training and validation data dir in zinc_se_a_mask.json by @dingye18 in #3068
- Fix catching by value by @njzjz in #3077
- resolve "Multiplication result converted to larger type" by @njzjz in #3149
- resolve "Multiplication result converted to larger type" by @njzjz in #3159
CI/CD
- move to ruff formatter by @njzjz in #2951
- add unit tests for LAMMPS fparam/aparam keywords by @njzjz in #2998
- fix labeler.yml with actions/labeler v5 by @njzjz in #3059
- add CodeQL checks by @njzjz in #3075
Code refactor and enhancement to prepare for upcoming v3
- rename `deepmd_cli` to `deepmd_utils` by @njzjz in #2983
- merge prob_sys_size with prob_sys_size;0:nsys:1.0 by @CaRoLZhangxy in #2963
- add utils for DP native model format by @njzjz in #3064
- rm rcut from DeepmdDataSystem by @wanghan-iapcm in #3106
- add activation_function and resnet arguments and NumPy implementation to NativeLayer by @njzjz in #3109
- NativeLayer: support None bias. by @wanghan-iapcm in #3111
- fix native layer concat bug. by @wanghan-iapcm in #3112
- model format for the embedding net by @wanghan-iapcm in #3113
- support numerical precision and env_mat by @wanghan-iapcm in #3114
- Add dp model format sea by @wanghan-iapcm in #3123
- input order of env_mat changed to be consistent with descriptor by @wanghan-iapcm in #3125
- doc string for dp model format descriptor se_e2_a by @wanghan-iapcm in #3124
- add native Networks for multiple Network classes by @njzjz in #3117
- add definition for the output of fitting and model by @wanghan-iapcm in #3128
- cc: refactor DeepPotModelDevi, making it framework-independent by @njzjz in #3134
- fix: model check assumes call as the forward method by @wanghan-iapcm in #3136
- support fitting net by @wanghan-iapcm in #3137
- refactorize NativeLayer, interface does not rely on the platform by @wanghan-iapcm in #3138
- refactorize networks, now can be used cross platform by @wanghan-iapcm in #3141
- move utility to `deepmd_utils` (without modification) by @njzjz in #3140
- add cross-platform AutoBatchSize by @njzjz in #3143
- move deepmd.entrypoints.{doc,gui} to deepmd_utils.entrypoints.{doc,gui} by @njzjz in #3144
- cc: refactor DeepPot to support multiple backends by @njzjz in #3142
- cc: refactor DeepTensor for multiple-backend framework by @njzjz in #3151
- cc: refactor DataModifier for multiple-backend framework by @njzjz in #3148
- fix: some issue of the output def by @wanghan-iapcm in #3152
- cc: merge `DeepPotBase` and `DeepTensor` member functions by @njzjz in #3145
- move `OutOfMemoryError` from `deepmd` to `deepmd_utils` by @njzjz in #3153
- set dpgui entry point to `deepmd_utils` by @njzjz in #3161
New Contributors
Full Changelog: v2.2.7...v2.2.8
v2.2.7
Caution
Known critical issues in this version
- Incorrect results on GPUs.
We suggest all users use a newer version. See #2866 for more information.
New features
- add `aparam_from_compute` to `pair deepmd` by @ChiahsinChu in #2929
- support compressing any neuron structure by @njzjz in #2933
- Support conversion to pbtxt in command line interface by @Yi-FanLi in #2943
Enhancement
- argcheck: restrict the type of elements in a list by @njzjz in #2945
- reformat func for further merging with pt version by @zxysbsbzxy in #2946
Build and release
Bug fix
- fix py lmp plugin path for editable installation by @njzjz in #2922
- fix se_a compression for just enough sel and symmetrical coordinates by @njzjz in #2924
- fix floating point exception when nloc or nall is zero by @njzjz in #2923
- fix typo about fparam/aparam by @ChiahsinChu in #2925
- Fix typos by @HydrogenSulfate in #2930
- only freeze in rank 0 by @njzjz in #2937
- fix ase tarball url and testing C library by @njzjz in #2950
New Contributors
- @HydrogenSulfate made their first contribution in #2930
- @zxysbsbzxy made their first contribution in #2946
Full Changelog: v2.2.6...v2.2.7
v2.2.6
Caution
Known critical issues in this version
- Incorrect results on GPUs.
We suggest all users use a newer version. See #2866 for more information.
New features
- apply compression for se_e2_a_tebd by @nahso in #2841
- cmake: support LAMMPS in built-in mode; remove kspace requirement by @njzjz in #2891
- support neighbor stat on GPUs by @njzjz in #2897
- Add
dpgui
entry point anddp gui
CLI by @njzjz in #2904
Enhancement
- forward GPU error message by @njzjz in #2878
- Generate CUDA stubs dynamically by @njzjz in #2884 and #2900
- refactor update_sel by @njzjz in #2901
- support combining frozen models into a pairwise DPRc model by @njzjz in #2902
Bugfixes
- `se_atten` and `se_atten_v2`
- nvnmd: update doc and fix bug in map_flt_nvnmd.cc by @LiuGroupHNU in #2831
- cmake: skip executing python when cross compiling by @njzjz in #2876
- set GPU binding in DeepTensor and DataModifier by @Yi-FanLi in #2886
- fix LAMMPS wheel with CUDA wheels by @njzjz in #2887
- fix TypeError when type_map is not given by @njzjz in #2890
- fix "expression result unused" warnings by @njzjz in #2910
CI/CD
- fix cuda installation for building wheels by @njzjz in #2879
- fix source distribution version in build-wheel.yml by @njzjz in #2883
- run Test CUDA in container by @njzjz in #2892
- fix a typo in tool.cibuildwheel.linux.environment by @njzjz in #2896
Documentation
- docs: update DPRc examples to make it compressible by @njzjz in #2874
- docs: add easy install development version by @njzjz in #2880
- docs: replace relative URLs in PyPI documentation by @njzjz in #2885
- docs: `mpirun --version` to get MPI version by @njzjz in #2915
Full Changelog: v2.2.5...v2.2.6
v2.2.5
Caution
Known critical issues in this version
- Incorrect results on GPUs.
- `se_atten_v2` gives inconsistent energy and forces.
We suggest all users use a newer version. See #2866 for more information.
New features
- lmp: support `unit real` by @njzjz and @Yi-FanLi in #2775, #2790, #2800
- add linear models that are linear combinations of DP models by @njzjz in #2781
- support atomic/relative model deviation in CLI by @njzjz in #2801
- make pairwise_dprc model work with MPI by @njzjz in #2818
Merge cuda and rocm code
Enhancement
- lmp: throw error for traditional installation if dependent packages are not installed by @njzjz in #2777
- lmp: add the header for atomic model deviation by @njzjz in #2778
- check status of allocate_temp by @njzjz in #2782
- do not sort atoms in dp test by @njzjz in #2794
- lmp: `fix_dplr` use the same `type_map` from `pair_deepmd` by @njzjz in #2776
- check status of allocate_temp by @njzjz in #2797
- fix np.loadtxt DeprecationWarning by @njzjz in #2802
- `ndarray.tostring` -> `ndarray.tobytes` by @njzjz in #2814
- `tf.accumulate_n` -> `tf.add_n` by @njzjz in #2815
- `tf.test.TestCase.test_session` -> `tf.test.TestCase.cached_session` by @njzjz in #2816
- make the pairwise DPRc model 2x faster by @njzjz in #2833
- prod_env_mat: allocate GPU memory out of frame loop by @njzjz in #2832
- refactor model version convert by @njzjz in #2854
- bump LAMMPS version to stable_2Aug2023_update1 by @njzjz in #2859
Documentation
- docs: improve checkpoint description by @njzjz in #2784
- fix grammatical errors by @Yi-FanLi in #2796
- docs: add doc to install cmake by @njzjz in #2805
- docs: add docs for additional CMake arguments via pip by @njzjz in #2806
- add citation for fparam by @njzjz in #2821
- add citation for `aparam` by @mingzhong15 in #2825
- docs: rewrite coding conventions by @njzjz in #2855
Build and release
- migrate Python build backend to scikit-build-core by @njzjz in #2798
- drop old GCC versions in test by @njzjz in #2812
- speed up GitHub Actions by @njzjz in #2822
- improve configurations of Python lint tools by @njzjz in #2823
- fix CTest by @njzjz in #2828
- add tox configuration by @njzjz in #2829
- use parse_version from packaging.version instead of pkg_resources by @njzjz in #2830
- build linux-aarch64 wheel on self-hosted runner by @njzjz in #2851
- add test cuda workflow by @njzjz in #2848
- cmake: use pip to install tensorflow by @njzjz in #2858
- cmake: use modern `HIP` language by @njzjz in #2857
- download cub using CMake FetchContent by @njzjz in #2870
Bug fixes
- fix dp test atomic polar; add UTs for dp test by @njzjz in #2785
- ignore drdq when generalized force loss is not set by @njzjz in #2807
- lmp: let fparam_do_compute not execute by default by @Yi-FanLi in #2819
- Fix invalid escape sequence by @njzjz in #2820
- fix missing version file with setuptools-scm v8 by @njzjz in #2850
- fix compatibility with NumPy 1.26 by @njzjz in #2853
- fix finetune RMSE and memory issue by @njzjz in #2860
- fix the issue of applying the modifier multiple times when the batch set is loaded only once by @wanghan-iapcm in #2864
Full Changelog: v2.2.4...v2.2.5
v2.2.4
Caution
Known critical issues in this version
- Incorrect results from DPLR training.
- `se_atten_v2` gives inconsistent energy and forces.
See #2866 for more information.
Breaking changes
New features
- support mapping to ghost type by @link89 in #2732
- Added atomic dipole to test.py by @hanao2 in #2747
- feat: calculate the real error in dp model-devi by @njzjz in #2757
- feat: add se_atten_v2 descriptor by @iProzd in #2755
Enhancement
Bug fixes
- fix documentation url in pyproject.toml by @njzjz in #2742
- fix bug in deepmd.infer.deep_pot.DeepPot by @ChiahsinChu in #2731
- Use `module.__path__[0]` instead of `module.__file__` by @njzjz in #2769
New Contributors
Full Changelog: v2.2.3...v2.2.4
v2.2.3
Caution
Known critical issues in this version
- Incorrect results from DPLR training.
- Incorrect results from compressed training of the se_atten model.
See #2866 for more information.
Breaking changes
- breaking(lmp): fix definition of cvatom by @njzjz in #2678
- breaking: change the default value of `rcond` from `1e-3` to `None` by @njzjz in #2688
- breaking: add energy bias to tab potential by @njzjz in #2670
New features
- Support minimization in dplr by @Yi-FanLi in #2584
- prod_force_grad: support multiple frames in parallel by @njzjz in #2601
- prod_force: support multiple frames in parallel by @njzjz in #2600
- Enable model compression for se_atten by @nahso in #2532
- Fix DPLR: Support time-dependent efield by @Yi-FanLi in #2625
- support fparam/aparam in dp model-devi by @njzjz in #2665
- add pairwise DPRc by @njzjz in #2682
- nvnmd-v1 with 31-type chemical species by @LiuGroupHNU in #2676
- support generalized force loss by @njzjz in #2690
- add args decorator for fitting and loss by @ChiahsinChu in #2710
Enhancement
- refactor: uncouple Descriptor and Fitting from Trainer by @njzjz in #2549
- ProdEnvMatAMixOp: move filter_ftype out of nsamples loop by @njzjz in #2604
- set specific mesh shapes for mixed type by @njzjz in #2481
- add SPDX ID to each file by @njzjz in #2639
- insert license to C++ header files by @njzjz in #2652
- Enhance the precision in the data format conversion tool raw_to_set.sh by @Vibsteamer in #2654
- improve CLI performance by @njzjz in #2696
- raise error if both v1 and v2 parameters are given by @njzjz in #2714
- symlink `model.ckpt.*` to relative paths by @njzjz in #2720
Documentation
- docs: add nodejs to toc by @njzjz in #2562
- docs: fix a typo in cxx.md by @njzjz in #2578
- improve docs and scripts to install libtensorflow_cc 2.12 by @njzjz in #2571
- docs: change `set-rpath` to `add-rpath` by @njzjz in #2587
- docs: clarify batch_size when MPI is used by @njzjz in #2585
- Se atten examples by @wanghan-iapcm in #2633
- Add zbl example by @Chengqian-Zhang in #2613
- docs: fix a typo in README TOC by @njzjz in #2651
- add precision arguments explicitly to examples by @njzjz in #2659
- docs: add the link to the compiler that TF uses by @njzjz in #2675
- update citation information by @njzjz in #2711
Build and release
- remove unnecessary files from pypi source distribution by @njzjz in #2565
- fix deepspin.pbtxt by @hztttt in #2566
- reduce model size for dplr unittest by @Yi-FanLi in #2561
- Add unittest for dp_ipi by @njzjz in #2574
- Reduce dp mask pb size and fix bug in dim_fparam/dim_aparam fetching by @dingye18 in #2588
- fix large files checking by @njzjz in #2564
- apply the C4 rule (flake8-comprehensions) by @njzjz in #2610
- build macOS arm64 wheels by @njzjz in #2616
- fix uploading C++ coverage for test_python workflow by @njzjz in #2622
- Insert braces after control statements in C++ by @njzjz in #2629
- cmake: migrate from `FindCUDA` to CUDA language by @njzjz in #2634
- set cmake_minimum_required for CUDA/ROCm by @njzjz in #2695
- report code coverage for cli by @njzjz in #2719
- bump lammps to stable_2Aug2023 by @njzjz in #2717
Bug fixes
- cmake: fix a typo in nodejs cmake file by @njzjz in #2563
- fix dplr: correct type check in get_valid_pairs by @Yi-FanLi in #2580
- fix_dplr: make pppm_dplr optional by @Yi-FanLi in #2581
- fix the missing modifier issue of dp compress by @Yi-FanLi in #2591
- Reduce dp mask pb size and fix bug in dim_fparam/dim_aparam fetching by @dingye18 in #2588
- import deepmd.op in infer.data_modifier by @Yi-FanLi in #2592
- fix memory leaking in test_env_mat_a_mix.cc by @njzjz in #2596
- pass ntypes to sub descriptors in the hybrid descriptor by @njzjz in #2611
- fix se_atten variable names when suffix is given by @njzjz in #2631
- fix hybrid compute stat when using mixed_type by @iProzd in #2614
- fix se_atten compression when suffix is given by @njzjz in #2635
- docs: fix the link of DOI badge by @njzjz in #2643
- synchronize in the beginning of all CUDA functions by @njzjz in #2661
- fix: sort aparam in the Python API by @njzjz in #2666
- fix: sort aparam in the C++ API by @njzjz in #2667
- fix se_atten tabulate when `exclude_types` is given by @njzjz in #2679
- fix TestDeepPotAPBCExcludeTypes by @njzjz in #2680
- make only the local GPU visible by @njzjz in #2716
New Contributors
- @nahso made their first contribution in #2532
- @Chengqian-Zhang made their first contribution in #2613
- @Vibsteamer made their first contribution in #2654
Full Changelog: v2.2.2...v2.2.3
v2.2.2
Caution
Known critical issues in this version
- Incorrect results from DPLR training.
See #2866 for more information.
New features
- Support different learning rate settings for each fitting net in multi-task mode by @HuangJiameng in #2348
- support the DOS (electronic density of states) fitting by @mingzhong15 in #2449
- add sub fields of hybrid descriptor by @njzjz in #2484
- Deep spin new by @hztttt in #2304
- add Node.js interface by @njzjz in #2524
- prefetch data during training by @njzjz in #2534
- make Fittings pluginable by @njzjz in #2541
C and header-only C++
- C API: support fparam and aparam for DeepPot by @njzjz in #2415
- add read_file_to_string to C API by @njzjz in #2412
- C: support fparam/aparam for DP model devi by @njzjz in #2486
- C: add select_by_type and select_map by @njzjz in #2491
- hpp: add compute_avg, compute_std, etc by @njzjz in #2493
- migrate from C API to hpp API by @njzjz in #2506
- allow building lmp/gmx from pre-compiled C library by @njzjz in #2514
- c: pass errors for read_file_to_string by @njzjz in #2547
Build and release
- bump to TF 2.12 by @njzjz in #2422
- support xla for CUDA pip packages by @njzjz in #2427
- disable Findtensorflow caches for skbuild by @njzjz in #2464
- using trusted publishing in upload_pypi by @njzjz in #2496
- remove _GLIBCXX_USE_CXX11_ABI macro for libraries linking against the C library by @njzjz in #2527
- use pypi lammps to test lammps plugin by @njzjz in #2548
- test lmp for linux wheel by @njzjz in #2550
- update package classifiers by @njzjz in #2558
- bump lammps to stable_23Jun2022_update4 by @njzjz in #2495
- Bump docker/login-action from 1.10.0 to 2.1.0 by @dependabot in #2411
- Bump pypa/cibuildwheel from 2.12.1 to 2.12.3 by @dependabot in #2478
- Bump docker/metadata-action from 4.3.0 to 4.4.0 by @dependabot in #2477
Enhancements
- Docs: Fix typo in parallel-training.md by @caic99 in #2438
- docs: add links to documentation in LAMMPS input by @njzjz in #2453
- Create DeePMD-kit_Quick_Start_Tutorial_EN.ipynb by @Q-Query in #2459
- use `error->one` for `get_file_content` by @njzjz in #2473
- improve citation information by @njzjz in #2474
- lmp: extract deepmd version information to a separate file by @njzjz in #2480
- docs: fix the link to the bib file by @njzjz in #2485
- add tests for dos training example by @mingzhong15 in #2488
- lmp: remove codes to calculate energy deviation by @njzjz in #2492
- Add training_data key in zinc_se_a_mask.json by @dingye18 in #2489
- clean unused methods in C++ API by @njzjz in #2498
- print model deviation of total energy per atom in `dp model_devi` by @njzjz in #2501
- raise a clear message when no set is found in a system by @njzjz in #2503
- catch tf.errors.CancelledError for OOM by @njzjz in #2504
- lmp: add tests for compute deeptensor/atom by @njzjz in #2507
- improve messages for model compatibility by @njzjz in #2518
- lmp/ipi: remove float precision by @njzjz in #2519
- remove warnings of batch size for mixed systems training by @njzjz in #2470
- remove unmaintained dp config by @njzjz in #2540
- docs: add train-energy-spin and train-fitting-dos to toctree by @njzjz in #2546
- Dplr unittest by @Yi-FanLi in #2545
Bug fixes
- fix typo by @kmu in #2404
- fix lmp_version.sh by @njzjz in #2417
- `source` a relative path by @njzjz in #2420
- set mixed_type to True for mixed systems by @njzjz in #2428
- keep the file permission when `make lammps` by @njzjz in #2414
- nvnmd: fix some warnings about matmul_flt_nvnmd by @LiuGroupHNU in #2430
- fix: avoid `using namespace std;` in header files by @e-kwsm in #2437
- fix C API documentation by @njzjz in #2424
- fix: descriptor function doc by @AnuragKr in #2440
- Fix dplr error by @Yi-FanLi in #2436
- fix the header of "lr" by @njzjz in #2462
- fix nopbc in finetune, DeepTensor test, and DipoleChargeModifier by @njzjz in #2461
- fix build_type_exclude_mask when nloc != nall by @njzjz in #2505
- fix dtype in PairTabOp by @njzjz in #2500
- lmp: forward errors to error->one instead of error->all by @njzjz in #2539
- fix se_e3 tabulate op by @njzjz in #2552
- Fix model-devi with mixed_type format by @iProzd in #2433
New Contributors
- @pre-commit-ci made their first contribution in #2416
- @e-kwsm made their first contribution in #2437
- @Q-Query made their first contribution in #2459
- @hztttt made their first contribution in #2304
Full Changelog: v2.2.1...v2.2.2
v2.2.1
Caution
Known critical issues in this version
- Incorrect results from se_e3 compressed model.
- Incorrect results from DPLR training.
See #2866 for more information.
New features
Enhancement
CICD
- Bump actions/checkout from 2 to 3 by @dependabot in #2381
- Bump pypa/cibuildwheel from 2.11.3 to 2.12.1 by @dependabot in #2382
- Bump docker/metadata-action from 3.3.0 to 4.3.0 by @dependabot in #2383
- Bump actions/upload-artifact from 2 to 3 by @dependabot in #2384
- Bump docker/build-push-action from 2.5.0 to 4.0.0 by @dependabot in #2385
- fix the version of pypa/gh-action-pypi-publish by @njzjz in #2389
Bug fixes
- atype_filter should have the shape of nloc by @njzjz in #2390
- Fix incompatibility after fixing the incontinuity of se_atten by @iProzd in #2397
- fix pdf docs by @njzjz in #2401
- clean old handlers before adding new one by @njzjz in #2400
New Contributors
- @dependabot made their first contribution in #2381
Full Changelog: v2.2.0...v2.2.1