
[IMP] Set the Default WG Memory Type to 'distributed' for the MNMG PyG Example #4532

Merged · 2 commits · Jul 15, 2024

Conversation

alexbarghi-nv (Member)
For MNMG processing, the only supported memory type is distributed. While it is possible to test with other memory types using torchrun on a single machine, anything other than distributed will not work when running on multiple machines, on a SLURM cluster, etc. This PR therefore makes distributed the default memory type for that example to avoid user confusion.
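A minimal sketch of what such a default looks like, assuming the example parses its WholeGraph (WG) memory type from the command line with argparse; the flag name `--wg_mem_type` and the listed choices here are illustrative, not necessarily the exact names used in the example:

```python
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="MNMG PyG example (sketch)")
    # 'distributed' is the only memory type that works across multiple nodes
    # (e.g. under SLURM), so it is the default; other types may still be
    # selected explicitly for single-node testing with torchrun.
    parser.add_argument(
        "--wg_mem_type",
        type=str,
        default="distributed",  # changed default per this PR
        choices=["distributed", "chunked", "continuous"],  # illustrative list
        help="WholeGraph memory type for feature storage",
    )
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(f"Using WG memory type: {args.wg_mem_type}")
```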

@alexbarghi-nv alexbarghi-nv self-assigned this Jul 11, 2024
@alexbarghi-nv alexbarghi-nv added the improvement (Improvement / enhancement to an existing function) and non-breaking (Non-breaking change) labels Jul 11, 2024
@alexbarghi-nv alexbarghi-nv added this to the 24.08 milestone Jul 11, 2024
@alexbarghi-nv alexbarghi-nv marked this pull request as ready for review July 11, 2024 17:32
@alexbarghi-nv alexbarghi-nv requested a review from a team as a code owner July 11, 2024 17:32
@alexbarghi-nv (Member Author)

/merge

@rapids-bot rapids-bot bot merged commit 5cdef4e into rapidsai:branch-24.08 Jul 15, 2024
131 checks passed
@alexbarghi-nv alexbarghi-nv deleted the default-distributed branch July 15, 2024 20:48
BradReesWork added a commit to rapidsai/cugraph-gnn that referenced this pull request Jul 15, 2024
[IMP] Set the Default WG Memory Type to 'distributed' for the MNMG PyG Example (rapidsai/cugraph#4532)
Labels: improvement (Improvement / enhancement to an existing function), non-breaking (Non-breaking change), python
2 participants