Enable proper optimizer state storing + Test between batches #3053
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Closed
Conversation
This pull request was exported from Phabricator. Differential Revision: D75565054

aporialiao added a commit to aporialiao/torchrec that referenced this pull request on Jun 6, 2025.
6822ac4 to afb8577 (Compare)
aporialiao added a commit to aporialiao/torchrec that referenced this pull request on Jun 6, 2025.
afb8577 to fd4f611 (Compare)
aporialiao added a commit to aporialiao/torchrec that referenced this pull request on Jun 6, 2025.
fd4f611 to 4a7c601 (Compare)
4a7c601 to f706b80 (Compare)
Labels
CLA Signed: This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
fb-exported
Summary:
Pull Request resolved: meta-pytorch#3053

# Main Changes
1. Enable the unit test with an adaptive optimizer, `Adagrad`.
   1. Previously I tested the optimizer state with `SGD`, whose state is static throughout training, so the test didn't actually check that optimizer state was stored. Here I used `Adagrad` instead, which exposed that the previous implementation did not properly store optimizers (sketched below).
2. Properly store optimizer state in `update_optimizer_state`.
   1. Append optimizer tensors as inputs to the all2all call, then parse the output tensors to store the right ones (sketched below).
   2. Optimizer tensors that did not need to be sent to a new rank are persisted and resaved.
   3. After new lookups are created, use `load_state_dict` to load the saved optimizer state into the current optimizers (sketched below).
3. Helpers and other small changes.
   1. Helper to compare optimizer tensors for unit tests (sketched below).
   2. Update `DMP` reshard optimizer saving to match the same FQN.

Reviewed By: aliafzal

Differential Revision: D75565054
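
For context on point 1, here is a minimal plain-PyTorch sketch (not the actual TorchRec test) of why `SGD` could not catch a dropped-optimizer-state bug: without momentum it keeps no per-parameter state, while `Adagrad` accumulates a running `sum` of squared gradients that has to survive resharding. The toy parameter and loss below are illustrative only.

```python
import torch

# Illustrative toy parameter; the real test exercises TorchRec embedding tables.
param = torch.nn.Parameter(torch.randn(4, 8))

for opt_cls in (torch.optim.SGD, torch.optim.Adagrad):
    opt = opt_cls([param], lr=0.1)
    (param * torch.randn_like(param)).sum().backward()
    opt.step()
    opt.zero_grad()
    # SGD without momentum keeps no per-parameter state, so losing optimizer
    # state during resharding would be invisible; Adagrad keeps a `sum`
    # accumulator (and a step counter) that must be carried over.
    print(opt_cls.__name__, [sorted(s.keys()) for s in opt.state.values()])
```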
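For point 2.1, a hypothetical sketch of the packing idea: each destination rank's optimizer-state shard rides along in the same `torch.distributed.all_to_all` as its weight shard, and the outputs are parsed back apart. This is not TorchRec's actual resharding code; the function name and signature are made up, and it assumes equally sized shards on every rank.

```python
from typing import List, Optional, Tuple

import torch
import torch.distributed as dist


def all2all_weights_and_opt_state(
    weight_shards: List[torch.Tensor],  # one weight shard per destination rank
    opt_shards: List[torch.Tensor],     # matching optimizer shard (e.g. Adagrad `sum`) per destination rank
    pg: Optional[dist.ProcessGroup] = None,
) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:
    # Pack each destination's weight and optimizer shard into one flat buffer
    # so a single all_to_all moves both.
    inputs = [
        torch.cat([w.flatten(), o.flatten()])
        for w, o in zip(weight_shards, opt_shards)
    ]
    # Simplifying assumption: every rank exchanges equally sized shards, so
    # the receive buffers can mirror the send buffers.
    outputs = [torch.empty_like(t) for t in inputs]
    dist.all_to_all(outputs, inputs, group=pg)
    # Parse each received buffer back into its weight and optimizer halves.
    new_weights, new_opt = [], []
    for buf, w, o in zip(outputs, weight_shards, opt_shards):
        new_weights.append(buf[: w.numel()].view_as(w))
        new_opt.append(buf[w.numel():].view_as(o))
    return new_weights, new_opt
```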
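For points 2.2 and 2.3, a hedged sketch of the reload step: optimizer tensors received over the all2all are merged with the ones that stayed local, and the combined mapping is handed to the newly created optimizer via `load_state_dict`. The dictionary layout and the assumption that the merged state is already re-keyed to the new lookups' parameter ordering are illustrative, not TorchRec's exact structures.

```python
from typing import Any, Dict

import torch


def reload_optimizer_state(
    new_optimizer: torch.optim.Optimizer,
    received_state: Dict[int, Dict[str, torch.Tensor]],   # parsed out of the all2all outputs
    persisted_state: Dict[int, Dict[str, torch.Tensor]],  # shards that never left this rank, resaved as-is
) -> None:
    # Start from the freshly created optimizer's own state_dict so the
    # param_groups section already matches the new lookups.
    merged: Dict[str, Any] = new_optimizer.state_dict()
    # state_dict()["state"] maps parameter index -> per-parameter tensors
    # (e.g. {"sum": ..., "step": ...} for Adagrad); keys here are assumed to
    # already be re-mapped to the post-reshard parameter ordering.
    merged["state"].update(persisted_state)
    merged["state"].update(received_state)
    new_optimizer.load_state_dict(merged)
```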
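For point 3.1, a minimal sketch of what an optimizer-tensor comparison helper for the unit tests could look like, assuming the state has been flattened into `{fqn: tensor}` mappings of dense tensors (sharded tensors would need per-shard handling). The helper name and tolerances are illustrative.

```python
from typing import Dict

import torch


def assert_optimizer_state_equal(
    expected: Dict[str, torch.Tensor],
    actual: Dict[str, torch.Tensor],
    rtol: float = 1e-6,
    atol: float = 1e-6,
) -> None:
    # The two mappings must cover exactly the same FQNs ...
    assert expected.keys() == actual.keys(), (
        f"FQN mismatch: {sorted(expected.keys() ^ actual.keys())}"
    )
    # ... and every optimizer tensor must match numerically.
    for fqn, exp in expected.items():
        torch.testing.assert_close(
            actual[fqn], exp, rtol=rtol, atol=atol,
            msg=f"optimizer state differs for {fqn}",
        )
```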