
support exclude_types for se_atten #2315

Merged: 2 commits merged into deepmodeling:devel on Feb 14, 2023

Conversation

@njzjz (Member) commented Feb 13, 2023

Fixes #2232.

Also cleans up the old, unused code for `exclude_types`.
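With this change, the option can appear in the descriptor section of the training input. A minimal sketch, assuming the usual se_atten parameters (all values here are illustrative placeholders, not taken from this PR; `exclude_types` lists pairs of atom types whose interactions are excluded):

```json
{
  "descriptor": {
    "type": "se_atten",
    "rcut_smth": 0.5,
    "rcut": 6.0,
    "sel": 120,
    "neuron": [25, 50, 100],
    "attn_layer": 2,
    "exclude_types": [[0, 1], [1, 1]]
  }
}
```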

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
@codecov-commenter

Codecov Report

Base: 74.80% // Head: 27.45% // This PR decreases project coverage by 47.35% ⚠️

Coverage data is based on head (0a69115) compared to base (d16231f).
Patch coverage: 5.55% of the modified lines in this pull request are covered.


Additional details and impacted files
```
@@             Coverage Diff             @@
##            devel    #2315       +/-   ##
===========================================
- Coverage   74.80%   27.45%   -47.35%
===========================================
  Files         217      215        -2
  Lines       21644    19877     -1767
  Branches     1586     1321      -265
===========================================
- Hits        16191     5458    -10733
- Misses       4430    13870     +9440
+ Partials     1023      549      -474
```

Impacted files (coverage Δ):

```
deepmd/descriptor/se_atten.py               9.77%  <5.55%>  (-83.48%) ⬇️
source/op/soft_min_virial_grad.cc           3.70%  <0.00%>  (-94.87%) ⬇️
source/op/soft_min_virial.cc                4.00%  <0.00%>  (-94.49%) ⬇️
source/op/prod_virial_grad_multi_device.cc  1.61%  <0.00%>  (-94.28%) ⬇️
source/op/soft_min_force_grad.cc            4.16%  <0.00%>  (-94.23%) ⬇️
source/op/soft_min.cc                       3.33%  <0.00%>  (-94.11%) ⬇️
source/op/prod_force_grad_multi_device.cc   1.81%  <0.00%>  (-93.57%) ⬇️
source/op/soft_min_force.cc                 5.26%  <0.00%>  (-92.78%) ⬇️
source/op/pair_tab.cc                       2.24%  <0.00%>  (-92.25%) ⬇️
deepmd/nvnmd/entrypoints/wrap.py            7.77%  <0.00%>  (-91.22%) ⬇️
... and 147 more
```


☔ View full report at Codecov.

@njzjz njzjz marked this pull request as draft February 13, 2023 06:15
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
@njzjz njzjz marked this pull request as ready for review February 13, 2023 06:50
@wanghan-iapcm wanghan-iapcm merged commit 9b4733d into deepmodeling:devel Feb 14, 2023
dingye18 added a commit to dingye18/deepmd-kit that referenced this pull request Feb 17, 2023
* print MAE for `dp test` (deepmodeling#2310)

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Fix bug deepmodeling#2311 when using FP32 in se_atten (deepmodeling#2312)

* support multiple frames DeepPot in C/hpp API (deepmodeling#2309)

* Add DeepPot C API v2 to support multiple frames. Also, preserve the
arguments for fparam and aparam for future use (planned in deepmodeling#2236). V1 is
kept for API and ABI compatibility.
* Support multiple frames for the DeepPot hpp API. API compatibility is
kept, and tests are added for the new behavior.

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>

* Implementation of se_a_mask op (deepmodeling#2313)

* Implement the se_a_mask op. Training and inference (C++ interface) are
tested; unit test files are still required.

* Fix bug in atom pref setting for forces.

* Add the unit test for descrpt se_a_mask

* Remove the unit test for dp mask model

* Update dp mask system path for unit test

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* support exclude_types for se_atten (deepmodeling#2315)

Fixes deepmodeling#2232.

Also cleans up the old, unused code for `exclude_types`.

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>

* fix restarting from the original model (deepmodeling#2317)

deepmodeling#2253 added support for restarting from a compressed model
but broke restarting from the original model. This PR fixes the
resulting error by detecting the model type first:

```
FAILED_PRECONDITION: Attempting to use uninitialized value
```

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
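The "detect the model type first" step above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the marker substring `"TabulateFusion"` used to recognize a compressed graph is an assumption for illustration only.

```python
def is_compressed_model(op_types):
    """Guess whether a frozen graph came from ``dp compress``.

    ``op_types`` is the list of operation type names found in the
    graph.  The marker substring below ("TabulateFusion") is an
    assumption made for illustration; the real check in deepmd-kit
    may inspect different ops or attributes.
    """
    return any("TabulateFusion" in t for t in op_types)

# A restart routine would branch on this instead of unconditionally
# treating the checkpoint as one model type:
print(is_compressed_model(["Placeholder", "MatMul", "TabulateFusionSeA"]))  # → True
print(is_compressed_model(["Placeholder", "MatMul"]))                       # → False
```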

* package dp_ipi in the PyPI wheels (deepmodeling#2320)

Fix deepmodeling#2282.

In addition, change the return code of `dp_ipi` (without any arguments)
from 1 to 0.

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* build Docker images for each push (deepmodeling#2324)

and push the image to the GitHub Container Registry.

Also, provide CUDA support for the PyPI LAMMPS package.

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>

* Fix typo in install docs (deepmodeling#2325)

Signed-off-by: Chun Cai <amoycaic@gmail.com>

* Add a script to detect model version (deepmodeling#2318)

This PR adds a script in deepmd/utils to detect the version of a Deep
Potential model.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
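The detection script described above might work along these lines. This is a hedged sketch only: the tensor names used as markers and the version labels returned are hypothetical assumptions, not the actual logic of the script in deepmd/utils.

```python
def detect_model_version(tensor_names):
    """Infer a Deep Potential model's format generation from the
    tensor names present in its frozen graph.

    The marker names below are hypothetical assumptions made for
    illustration; the real script may use different markers and
    version labels.
    """
    names = set(tensor_names)
    if "model_attr/model_version" in names:
        # Newer graphs are assumed to store the version explicitly.
        return "explicit"
    if "model_attr/model_type" in names:
        # Assumed marker for an intermediate format generation.
        return "1.x"
    # Fall back to the oldest assumed format.
    return "<1.0"

print(detect_model_version(["model_attr/model_version"]))  # → explicit
```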

---------

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Signed-off-by: Chun Cai <amoycaic@gmail.com>
Co-authored-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Duo <50307526+iProzd@users.noreply.github.com>
Co-authored-by: Chun Cai <amoycaic@gmail.com>
Co-authored-by: Yifan Li李一帆 <yifanl0716@gmail.com>

Successfully merging this pull request may close these issues.

[Parameters] model/descriptor[se_atten]/exclude_types should be supported
4 participants