model fragments for diloco #1446
Merged
Conversation
Force-pushed be87993 → 04977f1, 67b20d0 → 2926160, and 321a888 → d67485a.
Contributor (Author): Discussed offline with @tianyu-l. Planning to simplify some of this and keep the changes to …
tianyu-l reviewed on Jul 28, 2025.
Force-pushed b7d7242 → bff2c52.
This was referenced on Jul 31, 2025 (merged).
Force-pushed ef69776 → a4284e0, 9e317da → ac9ec1f, 4377685 → 0a8e148, and 56cb433 → ec935f5.
Summary:
- Add a configuration option for users to specify how they want to partition the model.
- If this option is provided, the model must implement `FaultTolerantTrainingSpec`, which defines the fragmentation function that splits the model according to the configuration.
- Determine the model fragments in the training script and pass them to the ft manager.

Test Plan: ran Llama 3 8B with 2 fragments and a 1-step delay; each fragment gets synced every 20 steps.
[screenshot: https://github.com/user-attachments/assets/6d16f486-7260-49d6-8ba3-3e98cd331e58]

Stack created with Sapling; best reviewed with ReviewStack (https://reviewstack.dev/pytorch/torchtitan/pull/1446). Stack: pytorch#1516, → pytorch#1446.
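The summary implies a small contract: the model exposes a fragmentation hook, the training script calls it with the user's partitioning config, and the resulting fragments are handed to the ft manager. Here is a minimal sketch of what that hook could look like, assuming an even split of the model's top-level blocks. Only `FaultTolerantTrainingSpec` is named in this PR; the method name, its signature, and the splitting strategy below are illustrative assumptions, not the actual torchtitan implementation.

```python
# Hypothetical sketch, not the real torchtitan API.
from typing import List

import torch.nn as nn


class FaultTolerantTrainingSpec:
    def fragment_model(self, model: nn.Module, num_fragments: int) -> List[nn.Module]:
        """Split the model's top-level blocks into groups so the ft
        manager can sync each fragment on its own schedule."""
        blocks = list(model.children())
        per_fragment = max(1, len(blocks) // num_fragments)
        return [
            nn.ModuleList(blocks[i : i + per_fragment])
            for i in range(0, len(blocks), per_fragment)
        ]
```

The training script would then call something like `spec.fragment_model(model, num_fragments)` and pass the result to the ft manager; the exact config field that carries `num_fragments` is also an assumption here.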
bentherien pushed a commit to bentherien/torchtitan_ that referenced this pull request on Aug 5, 2025:

Summary: remove some stale code that determines which parameters to pass to the outer optimizer.
Stack created with Sapling; best reviewed with ReviewStack (https://reviewstack.dev/pytorch/torchtitan/pull/1501). Stack: pytorch#1446, pytorch#1502, → pytorch#1501.
bentherien pushed a commit to bentherien/torchtitan_ that referenced this pull request on Aug 5, 2025:

Summary: the leaf folder wasn't being created, so no profiles were being written; create it if it doesn't exist.
Stack created with Sapling; best reviewed with ReviewStack (https://reviewstack.dev/pytorch/torchtitan/pull/1502). Stack: pytorch#1446, → pytorch#1502, pytorch#1501.
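For context on the pytorch#1502 fix, this is the usual remedy: create the leaf directory before the profiler writes into it. The path layout and names below are illustrative, not the actual torchtitan code.

```python
import os


def ensure_trace_dir(dump_folder: str, step: int) -> str:
    """Create the leaf profiling directory if it is missing, so the
    profiler has somewhere to write its trace files."""
    trace_dir = os.path.join(dump_folder, "profile_trace", f"iteration_{step}")
    os.makedirs(trace_dir, exist_ok=True)  # no-op if it already exists
    return trace_dir
```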
tianyu-l approved these changes on Aug 5, 2025.
joellidin pushed commits to one-covenant/torchtitan that referenced this pull request on Aug 8, 2025, carrying the same commit summaries as above (pytorch#1501, pytorch#1502, pytorch#1446).