fix float precision problem of se_atten in line 217 (deepmodeling#3961) (deepmodeling#3978)

Fix the float precision problem of se_atten in line 217. This fixes a bug that produced different energies between the QNN and LAMMPS.

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit

- **New Features**
  - Improved energy calculation methods for more accurate results in the `wrap` module.
  - Introduced new parameters for enhanced configurability in energy-related computations.
- **Improvements**
  - Enhanced handling and processing of energy shift arrays for better performance and accuracy.
  - Updated array manipulation and calculation methods for various wrapping functionalities.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: LiuGroupHNU <liujie123@HNU>
Co-authored-by: MoPinghui <mopinghui1020@gmail.com>
Co-authored-by: Han Wang <92130845+wanghan-iapcm@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Pinghui Mo <pinghui_mo@outlook.com>
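As a side note on the class of bug being fixed: the commit does not include the patched code itself, but the symptom it describes (two backends disagreeing on total energy) is characteristic of accumulating a per-atom energy shift in single precision. The sketch below is a hypothetical illustration, not the actual deepmd-kit code; the array name `energy_shift` and the magnitudes are assumptions chosen only to make the float32 rounding drift visible.

```python
import numpy as np

# Hypothetical per-atom energy shifts, e.g. one large bias value per atom.
# (Illustrative values only; not taken from the deepmd-kit source.)
rng = np.random.default_rng(0)
energy_shift = rng.normal(loc=-93.57, scale=0.01, size=100_000)

# Accumulate the same data in double vs. single precision.
total_f64 = np.float64(0.0)
total_f32 = np.float32(0.0)
for e in energy_shift:
    total_f64 += np.float64(e)
    total_f32 += np.float32(e)  # rounds to 24-bit mantissa at every step

# The two totals drift apart: this is how two otherwise-identical
# pipelines (e.g. a QNN inference path vs. LAMMPS) can report
# different energies for the same structure.
drift = abs(float(total_f32) - float(total_f64))
print(drift)
```

Promoting the shift array and its accumulation to float64, as the release notes' "handling and processing of energy shift arrays" item suggests, removes this per-step rounding and makes the two code paths agree to double precision.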