
[Feature] Add row-decomposition of adj. matrix to reduce graph partitioning overhead #720

Open · wants to merge 5 commits into main
Conversation

@chang-l (Contributor) commented Nov 20, 2024

Modulus Pull Request

Description

This PR introduces a new graph partitioning scheme for distributed GNN models in Modulus (tested only with MeshGraphNet). It decomposes the adjacency matrix evenly row-wise, eliminating most of the graph partitioning overhead during training.

In the MeshGraphNet model, the 1-D decomposition evenly splits the global_offsets across all ranks (i.e., it distributes the nodes evenly among ranks), followed by the corresponding global_indices (which represent all incoming edges of the local nodes). Both the node feature store (node embedding matrix) and the edge feature store (edge embedding matrix) follow this 1-D decomposition scheme, so there is no need to distinguish between source and destination node feature stores. Implicitly, we assume the adjacency matrix is square, meaning the source and destination node domains are identical, i.e., the graph is not bipartite.
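For illustration, here is a minimal sketch of the row-wise split under the stated assumptions (CSR-style global_offsets/global_indices and a square adjacency matrix); the helper name and the even chunking rule are illustrative, not the exact code in this PR:

```python
import torch

def rowwise_partition(
    global_offsets: torch.Tensor,  # CSR row pointers, shape (num_nodes + 1,)
    global_indices: torch.Tensor,  # column indices of all edges
    rank: int,
    world_size: int,
):
    # Split the rows (nodes) of the adjacency matrix evenly across ranks.
    num_nodes = global_offsets.numel() - 1
    chunk = (num_nodes + world_size - 1) // world_size
    row_start = rank * chunk
    row_end = min(row_start + chunk, num_nodes)

    # Local CSR offsets, rebased so the first local row starts at 0.
    local_offsets = global_offsets[row_start : row_end + 1] - global_offsets[row_start]

    # The matching slice of indices: all incoming edges of the local nodes.
    # This slice also fixes the 1-D layout of the local edge feature store.
    edge_start = int(global_offsets[row_start])
    edge_end = int(global_offsets[row_end])
    local_indices = global_indices[edge_start:edge_end]

    return local_offsets, local_indices
```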

To update the local edge feature store from the local node feature store, an all-to-all communication is required in each message-passing layer to gather updated non-local (but neighboring) node features, which are then used to update the edge feature store.
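A rough sketch of that exchange with torch.distributed, assuming each rank has precomputed which of its local node features every other rank needs (send_indices and recv_counts are hypothetical bookkeeping, not names from this PR):

```python
import torch
import torch.distributed as dist

def gather_neighbor_node_features(
    local_node_feats: torch.Tensor,    # (num_local_nodes, hidden_dim)
    send_indices: list[torch.Tensor],  # per rank: local nodes that rank needs
    recv_counts: list[int],            # per rank: number of features received
) -> torch.Tensor:
    # Pack the features each peer rank has requested.
    send_tensors = [local_node_feats[idx] for idx in send_indices]
    recv_tensors = [
        torch.empty(
            n,
            local_node_feats.size(1),
            dtype=local_node_feats.dtype,
            device=local_node_feats.device,
        )
        for n in recv_counts
    ]
    # One all-to-all per message-passing layer fetches the non-local
    # neighbor features needed to update the local edge feature store.
    dist.all_to_all(recv_tensors, send_tensors)
    return torch.cat(recv_tensors, dim=0)
```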

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.
  • The CHANGELOG.md is up to date with these changes.
  • An issue is linked to this pull request.

Dependencies

To enable this matrix decomposition scheme, developers need to pass matrix_decomp=True to the partition_graph_nodewise() function.
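As a usage sketch (only partition_graph_nodewise and matrix_decomp are named by this PR; the remaining argument names and the import path are assumptions about the Modulus API and may differ):

```python
import torch.distributed as dist
from modulus.models.gnn_layers import partition_graph_nodewise  # path assumed

rank = dist.get_rank()
world_size = dist.get_world_size()

# Request the row-wise (matrix) decomposition instead of the default
# nodewise partitioning.
graph_partition = partition_graph_nodewise(
    global_offsets,       # CSR row pointers of the global adjacency matrix
    global_indices,       # column indices (incoming edges per node)
    partition_size=world_size,
    partition_rank=rank,
    matrix_decomp=True,   # the flag introduced by this PR
)
```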

@mnabian @stadlmax @Alexey-Kamenev Can you please help review this PR?

@mnabian (Collaborator) commented Nov 26, 2024

Please update the CHANGELOG

@@ -769,6 +908,12 @@ def get_src_node_features_in_partition(
    ) -> torch.Tensor:  # pragma: no cover
        # if global features only on local rank 0 also scatter, split them
        # according to the partition and scatter them to other ranks

        if self.graph_partition.matrix_decomp:
Collaborator commented:

Ideally, one shouldn't make code that uses these utilities dependent on how the graph is partitioned. Instead of throwing this error, couldn't one just use get_dst_node_features underneath?
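A sketch of that suggestion (the dst-side counterpart name and the surrounding class are assumed; under matrix decomposition the source and destination node domains coincide, so the src query can delegate rather than raise):

```python
def get_src_node_features_in_partition(self, global_node_features, **kwargs):
    # With matrix decomposition there is a single, shared node domain,
    # so source-feature queries can transparently reuse the
    # destination-feature path instead of raising an error.
    if self.graph_partition.matrix_decomp:
        return self.get_dst_node_features_in_partition(
            global_node_features, **kwargs
        )
    ...
```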

@@ -872,6 +1017,12 @@ def get_global_src_node_features(
        if partitioned_node_features.device != self.device:
            raise AssertionError(error_msg)

        if self.graph_partition.matrix_decomp:
Collaborator commented:

Same comment as above.
