
support MPI and other atom_styles for LAMMPS atomic keyword #628

Merged: 3 commits merged into deepmodeling:devel on May 14, 2021

Conversation

njzjz (Member) commented May 14, 2021

fix problems left in #44

njzjz (Member, Author) commented May 14, 2021

I tried to run in serial and in parallel, and the outputs are the same.

codecov-commenter commented May 14, 2021

Codecov Report

Merging #628 (32cfe99) into devel (c93a084) will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##            devel     #628   +/-   ##
=======================================
  Coverage   74.37%   74.37%           
=======================================
  Files          81       81           
  Lines        6399     6399           
=======================================
  Hits         4759     4759           
  Misses       1640     1640           


@amcadmus amcadmus self-requested a review May 14, 2021 07:33
Comment on lines 610 to 635
if (out_each == 1){
  vector<double> std_f_all(all_nlocal);
  // Gather std_f and tags
  tagint *tag = atom->tag;
  int nprocs = comm->nprocs;
  for (int ii = 0; ii < nlocal; ii++) {
    tagsend[ii] = tag[ii];
    stdfsend[ii] = std_f[ii];
  }
  MPI_Gather(&nlocal, 1, MPI_INT, counts, 1, MPI_INT, 0, world);
  displacements[0] = 0;
  for (int ii = 0; ii < nprocs-1; ii++) displacements[ii+1] = displacements[ii] + counts[ii];
  MPI_Gatherv(tagsend, nlocal, MPI_LMP_TAGINT,
              tagrecv, counts, displacements, MPI_LMP_TAGINT, 0, world);
  MPI_Gatherv(stdfsend, nlocal, MPI_DOUBLE,
              stdfrecv, counts, displacements, MPI_DOUBLE, 0, world);
  if (rank == 0) {
    for (int dd = 0; dd < all_nlocal; ++dd) {
      std_f_all[tagrecv[dd]-1] = stdfrecv[dd];
    }
    for (int dd = 0; dd < all_nlocal; ++dd) {
      fp << " " << setw(18) << std_f_all[dd];
    }
  }
}
if (rank == 0) {
amcadmus (Member) commented:

Could you please indent this piece of code so it is consistent with the rest of the file?

njzjz (Member, Author) replied:

ok

@amcadmus amcadmus merged commit 6bba7fc into deepmodeling:devel May 14, 2021
@njzjz njzjz deleted the lmp2 branch May 14, 2021 08:47
This was referenced Jun 10, 2021
denghuilu added a commit to denghuilu/deepmd-kit that referenced this pull request Jun 15, 2021
add Important hint for getting-started.md

Update argcheck.py

add Important hint for variables

Update getting-started.md

check validity of data systems. print help message

add Important hint at getting-start.md (deepmodeling#622)

* add Important hint for getting-started.md

* add hint for some parameters

* add Important hint for variables

* Update argcheck.py

* Update getting-started.md

add doc of type embedding (deepmodeling#625)

Optimized mkindex function in doc/conf.py and added two files in troubleshooting. (deepmodeling#619)

support MPI and other atom_styles for LAMMPS atomic keyword (deepmodeling#628)

* support MPI and other atom_styles for LAMMPS atomic keyword

fix problems left in #44

* move out_each codes together

* indent the code

fix spell mistake (deepmodeling#638)

Atention -> Attention

Readme and Examples for Tensor mode (deepmodeling#632)

* Complete modification of tensor training, support combination of system with global/local label, and support polar label normalization to speed up training. Examples and documentation not added yet

* modify dipole json to pass ut test

* change json file (second time) to pass ut test

* modify test_data_modifier_shuffle.py file to fit new args rule

* modify data_modifier_shuffle: from dipole.npy to atomic_dipole.npy

* modify the name of pref_weight to pref and pref_atomic_weight to pref_atomic, plus some implementation mentioned by Han Wang in May 7th's email

* fix a bug occurring in ut test

* fix args of polar_se_a.json, to pass ut test

* change args: from loss_type to type, so that the args will be the same as ener mode, and will not cause conflict

* add examples and readme for tensor fitting mode in May 14

* change readme content of tensor fit

* change the file name of readme file of tensor fitting

* Update train-fitting-tensor.md

* Update train-fitting-tensor.md

* Update train-fitting-tensor.md

* change the explanation of why some of lcurve.out is 0

* Update train-fitting-tensor.md

* Update train-fitting-tensor.md

append to out_file when LAMMPS restarts (deepmodeling#640)

This ensures the out file will not be overwritten when LAMMPS restarts.
This commit may be conflicted with deepmodeling#392. Commit
@5597ea2b49f96e99a52a9779b04b6c12e5a79a04 should be dropped.

add an example of C++ inference to doc (deepmodeling#652)

* add an example of C++ inference to doc

* fix broken link

Add instructions for i-PI (deepmodeling#660)

* Add instructions for i-PI

* Update doc/getting-started.md

Co-authored-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>

Co-authored-by: tuoping <abby@DESKTOP-LV5KL0D.localdomain>
Co-authored-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>

Added doc for netsize setting, num_nodes specification, and sel setting in doc/troubleshooting/ (deepmodeling#657)

fix issue 668 (deepmodeling#680)

* fix bug of issue 668
gzq942560379 pushed a commit to HPC-AI-Team/deepmd-kit that referenced this pull request Sep 1, 2021
…ling#628)

* support MPI and other atom_styles for LAMMPS atomic keyword

fix problems left in deepmodeling#44

* move out_each codes together

* indent the code
3 participants