store energy bias with interface precision #2174
Conversation
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
@wanghan-iapcm @denghuilu @iProzd Discussion is welcome.
As far as I can see, apart from the precision of bias_atom_e, the implementation in this PR should be equivalent to the old one. Am I correct?
This PR brings the implementation used for the type embedding in #1592 and #1866 to the other cases. Previously, the bias was stored in the bias of the last layer of the fitting network. This PR creates a new constant (non-trainable) variable for it, as we did for the type embedding in #1592 and #1866.
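To make the idea concrete, here is a minimal sketch, not the actual deepmd-kit code: `fitting_net`, `atomic_energy`, and the bias values are illustrative stand-ins. It keeps the fitting network in FP32 while holding the per-type energy bias in a separate, non-trainable FP64 variable that is added back after the network output.

```python
import tensorflow as tf

# FP32 fitting network (stand-in for the real deepmd-kit fitting net).
fitting_net = tf.keras.Sequential([
    tf.keras.layers.Dense(240, activation="tanh", dtype=tf.float32),
    tf.keras.layers.Dense(1, dtype=tf.float32),
])

# Per-type energy bias stored as a separate, non-trainable FP64 variable
# (the values here are made up for illustration).
bias_atom_e = tf.Variable([[-187.6], [-93.8]], dtype=tf.float64,
                          trainable=False, name="bias_atom_e")

def atomic_energy(descriptor, atype):
    # The FP32 network only has to fit the small residual energy ...
    residual = fitting_net(descriptor)
    # ... while the large constant offset is added back in FP64.
    return tf.cast(residual, tf.float64) + tf.gather(bias_atom_e, atype)

# Example call: 3 atoms with types [0, 1, 1] and a random 64-dim descriptor.
energies = atomic_energy(tf.random.normal([3, 64]), tf.constant([0, 1, 1]))
```

Because the bias variable is not trainable, the optimizer never touches it, and it can be saved and loaded in the interface precision independently of the NN weights.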
I want to confirm that, for the standard cases (like se_a), the PR gives results equivalent to the old implementation.
Running the following with both the old code and this PR:
cd examples/water/se_e2_a
dp train input.json
The outputs are the same.
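For reference, one way to compare the two runs programmatically is sketched below. It assumes both builds were run with the same seed and number of steps and that their learning curves were written to hypothetical old/lcurve.out and new/lcurve.out paths; these paths are not part of the PR.

```python
import numpy as np

# Load the learning curves written by `dp train` for the two builds.
old = np.loadtxt("old/lcurve.out")
new = np.loadtxt("new/lcurve.out")

# With identical inputs and seeds, the losses should agree up to FP32 round-off.
assert old.shape == new.shape, "runs have different numbers of logged steps"
assert np.allclose(old, new, rtol=1e-6), "training curves differ"
print("lcurve.out matches between the old code and this PR")
```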
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
This PR moves the energy bias out of the NN in all cases and stores it with the interface precision.
When the interface precision is FP64 and the NN precision is FP32, this patch improves the accuracy of atomic energies with large absolute values. For example, for an atomic energy of 11451.41234567 eV, the FP32 representation is 11451.412 eV (accurate to about 3 decimal places); but with an FP64 bias of 11450.000000 eV, the NN only needs to fit 1.41234567 eV, whose FP32 representation is 1.4123456 eV (accurate to about 7 decimal places).
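A small NumPy sketch of the precision argument above; the energies are the example values from this description, not real data.

```python
import numpy as np

e_true = 11451.41234567        # example atomic energy in eV

# Storing the full value in FP32 keeps only ~7 significant digits,
# i.e. about 3 decimal places here:
print(np.float32(e_true))      # 11451.412

# With an FP64 bias of 11450.0 eV, the FP32 network only has to represent
# the small residual, so about 7 decimal places survive:
bias = np.float64(11450.0)
residual = np.float32(e_true - bias)
print(residual)                # 1.4123456

# Adding the FP64 bias back recovers the total energy far more accurately:
print(bias + np.float64(residual))
```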