
breaking(pt/tf/dp): disable bias in type embedding #3958

Merged — 11 commits merged into deepmodeling:devel on Jul 11, 2024

Conversation

@iProzd (Collaborator) commented Jul 9, 2024

This PR addresses an issue observed during training with DPA2 on complex datasets, such as mptraj. Specifically, the learning curves of energy from the 2024Q1-based branch and the devel branch show significant differences at the very beginning when setting tebd_dim = 256 (and thus descriptor dim_out = 128 + 256). The issue is illustrated in the following image:

![Example Image](https://github.com/deepmodeling/deepmd-kit/assets/50307526/701835a4-126f-4a93-91c7-f9e685c4dc9d)

After removing the bias in the type embedding, which affects the standard deviation of the descriptor when tebd_dim is very large, the learning curve improves significantly:

![Example Image](https://github.com/deepmodeling/deepmd-kit/assets/50307526/8915e7dd-1813-42bc-8617-fe8209bc6da1)

Notably, this behavior is not prominent when using a tebd_dim that is relatively smaller than the descriptor itself, such as when using DPA2 with tebd_dim = 8 or using DPA1.
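The effect described above can be reproduced with a minimal numpy sketch. This is not the deepmd-kit code; the dimensions (a 128-dimensional geometric part concatenated with a `tebd_dim`-dimensional type embedding, matching `dim_out` = 128 + 256 above) and the unit-scale bias are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def descriptor_std(tebd_dim, use_bias, geom_dim=128, n_atoms=1000):
    """Std of a toy concatenated descriptor [geometric part | type embedding]."""
    geom = rng.standard_normal((n_atoms, geom_dim))        # ~unit variance
    tebd = 0.1 * rng.standard_normal((n_atoms, tebd_dim))  # small learned signal
    if use_bias:
        # one bias vector shared by all atoms: shifts every row identically
        tebd = tebd + rng.standard_normal(tebd_dim)
    full = np.concatenate([geom, tebd], axis=1)  # dim_out = geom_dim + tebd_dim
    return float(full.std())

print(descriptor_std(256, use_bias=False))  # bias off: tebd part stays small
print(descriptor_std(256, use_bias=True))   # bias on: tebd part dominates the std
print(descriptor_std(8, use_bias=True))     # small tebd_dim: effect is minor
```

With `tebd_dim = 256` the shared bias roughly doubles the descriptor's standard deviation, while with `tebd_dim = 8` the 128 geometric columns dominate and the shift is negligible, matching the observation that the issue only shows up for large `tebd_dim`.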

The same issue exists in the `econf` type embedding, which will be addressed in a separate PR.

NOTE
This PR disables bias in type embedding in all backends, which is a breaking change.

Summary by CodeRabbit

  • New Features

    • Introduced use_tebd_bias and bias parameters across various components to control the use of bias in type embeddings and networks.
  • Updates

    • Updated serialization and deserialization methods to include the new parameters and ensure version compatibility.
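A tiny sketch of what the new knob controls: an embedding layer whose bias can be switched off. The class and its internals are hypothetical (the real classes live under `deepmd/dpmodel` and `deepmd/pt`); only the parameter name `use_tebd_bias` comes from the PR:

```python
import numpy as np

class TypeEmbedLayer:
    """Toy type-embedding lookup with an optional additive bias."""

    def __init__(self, ntypes, tebd_dim, use_tebd_bias=False, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((ntypes, tebd_dim))
        # The bias is only allocated when requested; the breaking change in
        # this PR is that no bias is added by default.
        self.b = rng.standard_normal(tebd_dim) if use_tebd_bias else None

    def __call__(self, atype):
        out = self.w[atype]
        return out + self.b if self.b is not None else out

layer = TypeEmbedLayer(ntypes=4, tebd_dim=16, use_tebd_bias=False)
embeddings = layer(np.array([0, 1, 2]))  # shape (3, 16), pure lookup, no bias
```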

coderabbitai bot (Contributor) commented Jul 9, 2024

Walkthrough

The updates introduce a new use_tebd_bias parameter across multiple files in the deepmd module. This optional parameter allows control over whether to implement bias in various embedding layers and networks. Modifications include changes in initialization, serialization, and deserialization methods to accommodate this new parameter, with version compatibility checks updated accordingly.
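The version-compatibility pattern mentioned here can be sketched as follows. This is a hedged illustration, not the deepmd-kit code: the version number, the assumed old default (`True`, since files written before this PR always had a bias), and the function names are illustrative:

```python
FORMAT_VERSION = 2  # illustrative; the real code uses its own version checks

def serialize(use_tebd_bias):
    # Newly written data always carries the key and the bumped version.
    return {"@version": FORMAT_VERSION, "use_tebd_bias": use_tebd_bias}

def deserialize(data):
    version = data.get("@version", 1)
    if version < FORMAT_VERSION:
        # Pre-change files were written when the bias was always on, so keep
        # the old behavior when loading them (assumed default).
        data = {**data, "use_tebd_bias": True}
    return data["use_tebd_bias"]

deserialize({"@version": 1})       # old data: defaults to True
deserialize(serialize(False))      # new data: round-trips the stored value
```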

Changes

| File Path | Change Summary |
| --- | --- |
| deepmd/dpmodel/descriptor/dpa1.py | Added `use_tebd_bias` parameter; updated serialization and compatibility |
| deepmd/dpmodel/descriptor/dpa2.py | Added `use_tebd_bias` parameter; updated serialization and compatibility |
| deepmd/dpmodel/descriptor/se_atten_v2.py | Added `use_tebd_bias` parameter; updated serialization and compatibility |
| deepmd/dpmodel/utils/network.py | Added `bias` parameter to `T_Network`; updated serialization and compatibility |
| deepmd/pt/model/network/network.py | Added `use_tebd_bias` parameter to `TypeEmbedNetConsistent`; updated serialization and compatibility |
| deepmd/tf/utils/network.py | Added `bias` parameter to `embedding_net`; included conditional logic for bias |

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant DescrptDPA1
    participant DescrptDPA2
    participant SEAttenV2
    participant T_Network
    participant TypeEmbedNetConsistent

    User ->> DescrptDPA1: Create instance (use_tebd_bias)
    DescrptDPA1 ->> DescrptDPA1: Initialize with use_tebd_bias
    DescrptDPA1 ->> DescrptDPA1: Serialize with use_tebd_bias

    User ->> DescrptDPA2: Create instance (use_tebd_bias)
    DescrptDPA2 ->> DescrptDPA2: Initialize with use_tebd_bias
    DescrptDPA2 ->> DescrptDPA2: Serialize with use_tebd_bias

    User ->> SEAttenV2: Create instance (use_tebd_bias)
    SEAttenV2 ->> SEAttenV2: Initialize with use_tebd_bias
    SEAttenV2 ->> SEAttenV2: Serialize with use_tebd_bias

    User ->> T_Network: Create instance (bias)
    T_Network ->> T_Network: Initialize with bias
    T_Network ->> T_Network: Serialize with bias

    User ->> TypeEmbedNetConsistent: Create instance (use_tebd_bias)
    TypeEmbedNetConsistent ->> TypeEmbedNetConsistent: Initialize with use_tebd_bias
    TypeEmbedNetConsistent ->> TypeEmbedNetConsistent: Serialize with use_tebd_bias

Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed between 27434e7 and 983cd2c, relative to the base of the PR.

Files selected for processing (3)
  • deepmd/dpmodel/utils/network.py (7 hunks)
  • deepmd/pt/model/network/network.py (9 hunks)
  • deepmd/tf/utils/network.py (4 hunks)
Files skipped from review as they are similar to previous changes (3)
  • deepmd/dpmodel/utils/network.py
  • deepmd/pt/model/network/network.py
  • deepmd/tf/utils/network.py


Each of the following five hunks adds `use_tebd_bias` to an existing argument list, e.g.:

@@ -275,6 +281,7 @@
concat_output_tebd,
precision,
use_econf_tebd,
use_tebd_bias,

Code scanning / CodeQL posted the same notice on all five hunks (@@ -275,6 +281,7 @@; @@ -178,6 +181,7 @@; @@ -206,6 +210,7 @@; @@ -234,6 +239,7 @@; @@ -298,6 +304,7 @@):

Unused local variable (Note, test): Variable use_tebd_bias is not used.
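The repeated CodeQL notice flags a common pattern: a value is unpacked into a local that is never forwarded. A hedged, self-contained illustration of the flagged shape and the usual fix — the function names and dict keys here are illustrative, not the PR's actual code:

```python
def deserialize_flagged(data):
    # CodeQL: "Unused local variable" — the value is popped but never used.
    use_tebd_bias = data.pop("use_tebd_bias")
    return {"precision": data["precision"]}

def deserialize_fixed(data):
    use_tebd_bias = data.pop("use_tebd_bias")
    # Forward the value so the flag actually takes effect downstream.
    return {"precision": data["precision"], "use_tebd_bias": use_tebd_bias}
```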
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Outside diff range and nitpick comments (7)
deepmd/dpmodel/utils/network.py (1)

Line range hint 278-279:
Document the new bias parameter.

The docstring should be updated to include a description of the new bias parameter.

    mixed_prec
        The input dict which stores the mixed precision setting for the embedding net
+    bias : bool, Optional
+        Whether to use bias in the embedding layer.
deepmd/dpmodel/descriptor/dpa1.py (2)

204-205: Document the new parameter.

The new parameter use_tebd_bias should be documented in the docstring to explain its purpose and usage.


499-499: Update versioning information.

The version number should be updated to reflect the addition of the new use_tebd_bias parameter.

deepmd/pt/model/network/network.py (2)

576-576: Add docstring for the new parameter.

The use_tebd_bias parameter is missing from the docstring. Adding it will improve the documentation.

        """
        - Construct a type embedding net.
        + Construct a type embedding net.
        + 
        + Args:
        +   use_tebd_bias (bool): Whether to use bias in the type embedding layer.
        """

661-662: Add docstring for the new parameter.

The use_tebd_bias parameter is missing from the docstring. Adding it will improve the documentation.

        """
        - Whether to use bias in the type embedding layer.
        """
deepmd/tf/descriptor/se_atten.py (2)

2082-2083: Document the new use_tebd_bias parameter.

The docstring should include a description for the use_tebd_bias parameter.

    use_tebd_bias : bool, Optional
        Whether to use bias in the type embedding layer.

2121-2121: Document the new use_tebd_bias parameter.

The docstring should include a description for the use_tebd_bias parameter.

    use_tebd_bias : bool, Optional
        Whether to use bias in the type embedding layer.

Review thread on deepmd/dpmodel/descriptor/dpa2.py (resolved).
codecov bot commented Jul 9, 2024

Codecov Report

Attention: Patch coverage is 81.48148% with 10 lines in your changes missing coverage. Please review.

Project coverage is 82.82%. Comparing base (623ddc7) to head (983cd2c).
Report is 110 commits behind head on devel.

Files with missing lines Patch % Lines
deepmd/dpmodel/descriptor/dpa1.py 75.00% 1 Missing ⚠️
deepmd/dpmodel/descriptor/dpa2.py 75.00% 1 Missing ⚠️
deepmd/dpmodel/descriptor/se_atten_v2.py 66.66% 1 Missing ⚠️
deepmd/dpmodel/utils/type_embed.py 75.00% 1 Missing ⚠️
deepmd/pt/model/descriptor/dpa1.py 75.00% 1 Missing ⚠️
deepmd/pt/model/descriptor/dpa2.py 75.00% 1 Missing ⚠️
deepmd/pt/model/descriptor/se_atten_v2.py 66.66% 1 Missing ⚠️
deepmd/pt/model/network/network.py 80.00% 1 Missing ⚠️
deepmd/tf/descriptor/se_atten.py 75.00% 1 Missing ⚠️
deepmd/tf/utils/type_embed.py 83.33% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #3958      +/-   ##
==========================================
- Coverage   82.83%   82.82%   -0.01%     
==========================================
  Files         520      520              
  Lines       50832    50869      +37     
  Branches     3015     3015              
==========================================
+ Hits        42106    42133      +27     
- Misses       7790     7800      +10     
  Partials      936      936              


Resolved review threads:
  • deepmd/dpmodel/descriptor/se_atten_v2.py (outdated)
  • deepmd/utils/argcheck.py
  • deepmd/dpmodel/utils/network.py
  • deepmd/tf/utils/network.py
@njzjz njzjz added the breaking change Breaking changes that should notify users. label Jul 9, 2024
@wanghan-iapcm (Collaborator) left a comment:
I agree with the comments from @njzjz, and it will LGTM when the comments are addressed.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

Outside diff range, codebase verification and nitpick comments (3)
deepmd/dpmodel/utils/type_embed.py (1)

48-49: Document the new parameter use_tebd_bias.

The new parameter use_tebd_bias should be documented in the class docstring for better clarity.

    use_econf_tebd: bool, Optional
        Whether to use electronic configuration type embedding.
+   use_tebd_bias : bool, Optional
+       Whether to use bias in the type embedding layer.
    type_map: List[str], Optional
        A list of strings. Give the name to each type of atoms.
deepmd/tf/utils/type_embed.py (1)

103-104: Document the new parameter use_tebd_bias.

The new parameter use_tebd_bias should be documented in the class docstring for better clarity.

    use_econf_tebd: bool, Optional
        Whether to use electronic configuration type embedding.
+   use_tebd_bias : bool, Optional
+       Whether to use bias in the type embedding layer.
    type_map: List[str], Optional
        A list of strings. Give the name to each type of atoms.
deepmd/pt/model/descriptor/dpa2.py (1)

125-126: Document the new parameter use_tebd_bias.

The new parameter use_tebd_bias should be documented in the class docstring for better clarity.

    use_econf_tebd : bool, Optional
        Whether to use electronic configuration type embedding.
+   use_tebd_bias : bool, Optional
+       Whether to use bias in the type embedding layer.
    type_map : List[str], Optional
        A list of strings. Give the name to each type of atoms.

@iProzd iProzd requested review from njzjz and wanghan-iapcm July 10, 2024 14:19
@wanghan-iapcm wanghan-iapcm enabled auto-merge July 11, 2024 01:10
@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Jul 11, 2024
Merged via the queue into deepmodeling:devel with commit 86f6e84 Jul 11, 2024
60 checks passed
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this pull request Sep 18, 2024
Labels: breaking change (Breaking changes that should notify users.), Python