[finetune-llm] Training configs for 70b full parameter finetuning #85
Merged
Commits (6)
e55beb9  add  (amogkam)
5911277  Merge branch 'main' of github.com:anyscale/templates into main  (amogkam)
953e110  updated configs  (amogkam)
03a6991  update  (amogkam)
7bcd588  add chat  (amogkam)
9c62e68  Merge branch 'main' of github.com:anyscale/templates into 70b-finetune  (amogkam)
templates/fine-tune-llm/training_configs/full_param/llama-2-70b-4k-2xp4de_24xlarge.yaml (new file, 17 additions, 0 deletions)
model_id: meta-llama/Llama-2-70b-hf # <-- change this to the model you want to fine-tune
train_path: s3://air-example-data/gsm8k/train.jsonl # <-- change this to the path to your training data
valid_path: s3://air-example-data/gsm8k/test.jsonl # <-- change this to the path to your validation data. This is optional
context_length: 4096 # <-- change this to the context length you want to use
num_devices: 16 # <-- change this to the total number of GPUs that you want to use
num_epochs: 1 # <-- change this to the number of epochs that you want to train for
train_batch_size_per_device: 1
eval_batch_size_per_device: 1
learning_rate: 5e-6
num_checkpoints_to_keep: 1
dataset_size_scaling_factor: 10000
output_dir: /mnt/local_storage
deepspeed:
  config_path: deepspeed_configs/zero_3_llama_2_70b.json
flash_attention_2: True
worker_resources:
  p4de.24xlarge: 1 # <-- this maps to the custom_resources field of the job_compute_configs file so the appropriate nodes can scale up
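For context on the worker_resources entry above: each p4de.24xlarge node carries 8x A100 80GB GPUs, so num_devices: 16 implies two such nodes (hence the "2x" in the file name). Below is a minimal sketch of how the matching custom_resources entry might look in the job compute config that the comment refers to; only the custom resource name p4de.24xlarge: 1 comes from this PR, while the surrounding keys (worker_node_types, instance_type, min_workers, max_workers) are assumptions about the compute-config schema, not something this PR shows.

# Hypothetical compute-config excerpt (schema assumed, not part of this PR)
worker_node_types:
  - name: gpu-worker
    instance_type: p4de.24xlarge
    min_workers: 0
    max_workers: 2              # two nodes x 8 GPUs covers num_devices: 16
    resources:
      custom_resources:
        p4de.24xlarge: 1        # matches the worker_resources key in the training config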
templates/fine-tune-llm/training_configs/full_param/llama-2-70b-chat-4k-2xp4de_24xlarge.yaml (new file, 17 additions, 0 deletions)
model_id: meta-llama/Llama-2-70b-chat-hf # <-- change this to the model you want to fine-tune
train_path: s3://air-example-data/gsm8k/train.jsonl # <-- change this to the path to your training data
valid_path: s3://air-example-data/gsm8k/test.jsonl # <-- change this to the path to your validation data. This is optional
context_length: 4096 # <-- change this to the context length you want to use
num_devices: 16 # <-- change this to the total number of GPUs that you want to use
num_epochs: 1 # <-- change this to the number of epochs that you want to train for
train_batch_size_per_device: 1
eval_batch_size_per_device: 1
learning_rate: 5e-6
num_checkpoints_to_keep: 1
dataset_size_scaling_factor: 10000
output_dir: /mnt/local_storage
deepspeed:
  config_path: deepspeed_configs/zero_3_llama_2_70b.json
flash_attention_2: True
worker_resources:
  p4de.24xlarge: 1 # <-- this maps to the custom_resources field of the job_compute_configs file so the appropriate nodes can scale up
Review comment (on dataset_size_scaling_factor: 10000): what does this do?
Reply: This removes llmforge's restriction on dataset size. It is currently a workaround; in the future we should remove the default dataset-size restriction in llmforge and only enable it for public endpoints. This is being tracked, and I will open an issue for it.
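For anyone adapting these configs to their own data, a minimal sketch of the fields you would typically edit, assuming the semantics described in the comments and the reply above; the bucket paths below are placeholders, not part of this PR:

model_id: meta-llama/Llama-2-70b-hf        # or meta-llama/Llama-2-70b-chat-hf, as in the second config
train_path: s3://your-bucket/train.jsonl   # placeholder: your own training data
valid_path: s3://your-bucket/test.jsonl    # placeholder: your own validation data (optional)
dataset_size_scaling_factor: 10000         # workaround discussed above: relaxes llmforge's default dataset-size restriction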