[Feature] Add row-decomposition of adj. matrix to reduce graph partitioning overhead #720
base: main
Conversation
Please update the CHANGELOG
```
@@ -769,6 +908,12 @@ def get_src_node_features_in_partition(
    ) -> torch.Tensor:  # pragma: no cover
        # if global features only on local rank 0 also scatter, split them
        # according to the partition and scatter them to other ranks

        if self.graph_partition.matrix_decomp:
```
Ideally, one shouldn't make the code using these utilities dependent on how a graph is partitioned. Couldn't one, instead of throwing this error, just use get_dst_node_features underneath?
```
@@ -872,6 +1017,12 @@ def get_global_src_node_features(
        if partitioned_node_features.device != self.device:
            raise AssertionError(error_msg)

        if self.graph_partition.matrix_decomp:
```
same comment as above
Modulus Pull Request
Description
This PR introduces a new graph partitioning scheme for Modulus distributed GNN models (tested only with MeshGraphNet) that evenly decomposes the adjacency matrix row-wise, effectively eliminating most of the graph partitioning overhead during training.
In the MeshGraphNet model, the 1-D decomposition evenly splits the global_offsets across all ranks (i.e., it distributes the nodes evenly among ranks), followed by the corresponding global_indices (which represent all incoming edges of the local nodes). Both the node feature store (node embedding matrix) and the edge feature store (edge embedding matrix) follow this 1-D decomposition scheme, so there is no need to distinguish between src and dst node feature stores. Implicitly, we assume the adjacency matrix is square, meaning the source and destination node domains are identical, i.e., the graph is not bipartite.
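For illustration, here is a minimal, self-contained sketch of this kind of row-wise CSR split; it is not the Modulus implementation, and the helper name and the contiguous even block assignment are assumptions. Each rank keeps a contiguous block of rows (nodes) together with all of their incoming edges:

```python
import torch


def split_csr_rowwise(global_offsets: torch.Tensor,
                      global_indices: torch.Tensor,
                      rank: int,
                      world_size: int):
    """Hypothetical sketch: assign each rank a contiguous block of rows (nodes)
    of a CSR adjacency matrix, together with all incoming edges of those rows."""
    num_nodes = global_offsets.numel() - 1
    rows_per_rank = (num_nodes + world_size - 1) // world_size
    row_start = rank * rows_per_rank
    row_end = min(row_start + rows_per_rank, num_nodes)

    # slice the offsets of the local rows and re-base them to start at zero
    local_offsets = global_offsets[row_start : row_end + 1].clone()
    edge_start = int(local_offsets[0])
    edge_end = int(local_offsets[-1])
    local_offsets -= edge_start

    # column indices of all incoming edges of the local rows
    local_indices = global_indices[edge_start:edge_end].clone()
    return local_offsets, local_indices


# toy example: 4-node graph in CSR form, split across 2 ranks
offsets = torch.tensor([0, 2, 3, 5, 6])
indices = torch.tensor([1, 3, 0, 0, 2, 1])
for r in range(2):
    print(r, split_csr_rowwise(offsets, indices, rank=r, world_size=2))
```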
To update the local edge store from the local node store, an all-to-all communication is needed in each message-passing layer to gather the updated non-local (but neighboring) node features, which are then used to update the edge feature store.
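A rough sketch of that per-layer exchange using torch.distributed.all_to_all is shown below. The index bookkeeping (which local nodes each peer needs) is assumed to be precomputed, and the function name is illustrative rather than the PR's actual API:

```python
import torch
import torch.distributed as dist


def exchange_neighbor_features(local_node_features: torch.Tensor,
                               send_indices: list,
                               recv_counts: list):
    """Hypothetical sketch of the per-layer all-to-all: send the features of local
    nodes that other ranks need as neighbors, and receive the non-local neighbor
    features required to update the local edge feature store.

    send_indices[r]: local node IDs whose features rank r needs (precomputed)
    recv_counts[r]:  number of neighbor features expected from rank r
    """
    feat_dim = local_node_features.shape[-1]
    send_list = [local_node_features[idx] for idx in send_indices]
    recv_list = [
        torch.empty(n, feat_dim,
                    dtype=local_node_features.dtype,
                    device=local_node_features.device)
        for n in recv_counts
    ]
    dist.all_to_all(recv_list, send_list)  # requires an initialized process group
    return recv_list
```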
Checklist
Dependencies
To enable this matrix decomposition scheme, developers need to pass matrix_decomp=True to the partition_graph_nodewise() function.
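A minimal usage sketch follows; only the partition_graph_nodewise() name and the matrix_decomp=True flag come from this PR, while the import path, the other argument names, and the toy graph are assumptions for illustration:

```python
import torch
import torch.distributed as dist

# import path assumed for illustration
from modulus.models.gnn_layers import partition_graph_nodewise

# toy global CSR adjacency (row offsets / incoming-edge indices) of a 4-node graph
global_offsets = torch.tensor([0, 2, 3, 5, 6])
global_indices = torch.tensor([1, 3, 0, 0, 2, 1])

# argument names other than matrix_decomp are assumptions about the signature
graph_partition = partition_graph_nodewise(
    global_offsets,
    global_indices,
    partition_size=dist.get_world_size(),
    partition_rank=dist.get_rank(),
    device="cuda",
    matrix_decomp=True,  # flag introduced in this PR to enable the row decomposition
)
```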
@mnabian @stadlmax @Alexey-Kamenev Can you please help review this PR?