Qwen2.5 #1863
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Merged
Commits
25 commits, all by calvinpelletier:
5b2acbb  additional special tokens
5c18e35  chat template
6c7c0a9  .
faa9249  tool response
0af29a1  qwen2.5 model builders
28cef3a  qwen2.5 lora builders
d4a937c  configs
d5ba267  docstrings
0c9eff7  fix
ab8d452  various
590a89c  lint
9595a46  separating qwen2 and qwen2.5
9522452  unit test for qwen2.5 tokenizer
d938e3b  Merge remote-tracking branch 'origin/main' into qwen2.5
70bbe96  separate model builders for base and instruct models
a110bbb  moving chat template logic into tokenizer
ac221f3  tool call special tokens
a17ba7a  separate qwen2/2.5 tokenizers
50232be  configs
b13de5a  Merge remote-tracking branch 'origin/main' into qwen2.5
ec905e7  various
c064d1d  Merge remote-tracking branch 'origin/main' into qwen2.5
e813076  addressing comments
c35ec53  adding base/instruct explanations in docstrings
9271c5c  registry fix
qwen2_5/0_5B_full.yaml (new file, +77 lines)
# Config for multi-device full finetuning in full_finetune_distributed.py
# using a Qwen2.5 0.5B model
#
# This config assumes that you've run the following command before launching
# this run:
# tune download Qwen/Qwen2.5-0.5B-Instruct --output-dir /tmp/Qwen2_5-0_5B-Instruct --ignore-patterns None
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 full_finetune_distributed --config qwen2_5/0_5B_full
#
# You can add specific overrides through the command line. For example
# to override the checkpointer directory while launching training
# you can run:
# tune run --nnodes 1 --nproc_per_node 2 full_finetune_distributed --config qwen2_5/0_5B_full checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
#
# This config works best when the model is being fine-tuned on 2+ GPUs.
# Single device full finetuning requires more memory optimizations. It's
# best to use 0_5B_full_single_device.yaml for those cases

# Tokenizer
tokenizer:
  _component_: torchtune.models.qwen2_5.qwen2_5_tokenizer
  path: /tmp/Qwen2_5-0_5B-Instruct/vocab.json
  merges_file: /tmp/Qwen2_5-0_5B-Instruct/merges.txt
  max_seq_len: null

# Dataset
dataset:
  _component_: torchtune.datasets.alpaca_cleaned_dataset
  packed: False
seed: null
shuffle: True

# Model Arguments
model:
  _component_: torchtune.models.qwen2_5.qwen2_5_0_5b

checkpointer:
  _component_: torchtune.training.FullModelHFCheckpointer
  checkpoint_dir: /tmp/Qwen2_5-0_5B-Instruct
  checkpoint_files: [
    model.safetensors
  ]
  recipe_checkpoint: null
  output_dir: /tmp/Qwen2_5-0_5B-Instruct-finetune
  model_type: QWEN2
resume_from_checkpoint: False

# Fine-tuning arguments
batch_size: 2
epochs: 1
optimizer:
  _component_: torch.optim.AdamW
  fused: True
  lr: 2e-5
loss:
  _component_: torchtune.modules.loss.CEWithChunkedOutputLoss
max_steps_per_epoch: null
gradient_accumulation_steps: 16
compile: False

# Training env
device: cuda

# Memory management
enable_activation_checkpointing: True

# Reduced precision
dtype: bf16

# Logging
metric_logger:
  _component_: torchtune.training.metric_logging.DiskLogger
  log_dir: ${output_dir}
output_dir: /tmp/Qwen2_5-0_5B-Instruct-finetune
log_every_n_steps: 1
log_peak_memory_stats: False
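For reference, the two components named under tokenizer and model in this config can also be built directly in Python. The snippet below is a minimal sketch, not part of the diff, and assumes the qwen2_5 builders introduced in this PR accept the same keyword arguments that appear as config fields above.

from torchtune.models.qwen2_5 import qwen2_5_0_5b, qwen2_5_tokenizer

# Tokenizer built from the same vocab/merges files the config points at.
tokenizer = qwen2_5_tokenizer(
    path="/tmp/Qwen2_5-0_5B-Instruct/vocab.json",
    merges_file="/tmp/Qwen2_5-0_5B-Instruct/merges.txt",
    max_seq_len=None,
)

# 0.5B model; the full-finetune recipe then loads the HF checkpoint listed
# under checkpointer into this module before training.
model = qwen2_5_0_5b()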
qwen2_5/0_5B_full_single_device.yaml (new file, +82 lines)
# Config for single device full finetuning in full_finetune_single_device.py
# using a Qwen2.5 0.5B model
#
# This config assumes that you've run the following command before launching
# this run:
# tune download Qwen/Qwen2.5-0.5B-Instruct --output-dir /tmp/Qwen2_5-0_5B-Instruct --ignore-patterns None
#
# The default config uses an optimizer from bitsandbytes. If you do not have it installed,
# you can install it with
# pip install bitsandbytes
#
# To launch on a single device, run the following command from root:
# tune run full_finetune_single_device --config qwen2_5/0_5B_full_single_device
#
# You can add specific overrides through the command line. For example
# to override the checkpointer directory while launching training
# you can run:
# tune run full_finetune_single_device --config qwen2_5/0_5B_full_single_device checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
#
# This config works only for training on single device.

# Tokenizer
tokenizer:
  _component_: torchtune.models.qwen2_5.qwen2_5_tokenizer
  path: /tmp/Qwen2_5-0_5B-Instruct/vocab.json
  merges_file: /tmp/Qwen2_5-0_5B-Instruct/merges.txt
  max_seq_len: null

# Dataset
dataset:
  _component_: torchtune.datasets.alpaca_cleaned_dataset
  packed: False
seed: null
shuffle: True

# Model Arguments
model:
  _component_: torchtune.models.qwen2_5.qwen2_5_0_5b

checkpointer:
  _component_: torchtune.training.FullModelHFCheckpointer
  checkpoint_dir: /tmp/Qwen2_5-0_5B-Instruct
  checkpoint_files: [
    model.safetensors
  ]
  recipe_checkpoint: null
  output_dir: /tmp/Qwen2_5-0_5B-Instruct-finetune
  model_type: QWEN2
resume_from_checkpoint: False

# Fine-tuning arguments
batch_size: 2
epochs: 1
optimizer:
  _component_: torch.optim.AdamW
  fused: True
  lr: 2e-5

loss:
  _component_: torchtune.modules.loss.CEWithChunkedOutputLoss
optimizer_in_bwd: False

max_steps_per_epoch: null
gradient_accumulation_steps: 8
compile: False

# Training environment
device: cuda

# Memory management
enable_activation_checkpointing: True

# Reduced precision
dtype: bf16

# Logging
metric_logger:
  _component_: torchtune.training.metric_logging.DiskLogger
  log_dir: ${output_dir}
output_dir: /tmp/Qwen2_5-0_5B-Instruct-finetune
log_every_n_steps: 1
log_peak_memory_stats: False
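As a quick illustration of how a recipe turns the _component_ fields in this YAML into Python objects, the sketch below loads the config with OmegaConf and instantiates the model and tokenizer via torchtune's config utilities. This is an assumption about usage, not part of the diff, and the local filename is hypothetical.

from omegaconf import OmegaConf
from torchtune import config

# Hypothetical local copy of the config shown above.
cfg = OmegaConf.load("0_5B_full_single_device.yaml")

# Each _component_ entry resolves to a callable, which is invoked with the
# remaining keys in that section as keyword arguments.
model = config.instantiate(cfg.model)          # -> torchtune.models.qwen2_5.qwen2_5_0_5b()
tokenizer = config.instantiate(cfg.tokenizer)  # -> torchtune.models.qwen2_5.qwen2_5_tokenizer(...)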
qwen2_5/0_5B_lora.yaml (new file, +114 lines)
# Config for multi-device LoRA finetuning in lora_finetune_distributed.py
# using a Qwen2.5 0.5B model
#
# This config assumes that you've run the following command before launching
# this run:
# tune download Qwen/Qwen2.5-0.5B-Instruct --output-dir /tmp/Qwen2_5-0_5B-Instruct --ignore-patterns None
#
# To launch on 2 devices, run the following command from root:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config qwen2_5/0_5B_lora
#
# You can add specific overrides through the command line. For example
# to override the checkpointer directory while launching training
# you can run:
# tune run --nnodes 1 --nproc_per_node 2 lora_finetune_distributed --config qwen2_5/0_5B_lora checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
#
# This config works best when the model is being fine-tuned on 2+ GPUs.
# For single device LoRA finetuning please use 0_5B_lora_single_device.yaml

# Model Arguments
model:
  _component_: torchtune.models.qwen2_5.lora_qwen2_5_0_5b
  lora_attn_modules: ['q_proj', 'v_proj']
  apply_lora_to_mlp: False
  apply_lora_to_output: False
  lora_rank: 32
  lora_alpha: 64
  lora_dropout: 0.0

tokenizer:
  _component_: torchtune.models.qwen2_5.qwen2_5_tokenizer
  path: /tmp/Qwen2_5-0_5B-Instruct/vocab.json
  merges_file: /tmp/Qwen2_5-0_5B-Instruct/merges.txt
  max_seq_len: null

checkpointer:
  _component_: torchtune.training.FullModelHFCheckpointer
  checkpoint_dir: /tmp/Qwen2_5-0_5B-Instruct
  checkpoint_files: [
    model.safetensors
  ]
  recipe_checkpoint: null
  output_dir: /tmp/Qwen2_5-0_5B-Instruct-lora-finetune
  model_type: QWEN2
resume_from_checkpoint: False

# Dataset and Sampler
dataset:
  _component_: torchtune.datasets.alpaca_cleaned_dataset
  packed: False

seed: null
shuffle: True
batch_size: 4

# Optimizer and Scheduler
optimizer:
  _component_: torch.optim.AdamW
  fused: True
  weight_decay: 0.01
  lr: 2e-3

lr_scheduler:
  _component_: torchtune.training.lr_schedulers.get_cosine_schedule_with_warmup
  num_warmup_steps: 100

loss:
  _component_: torchtune.modules.loss.CEWithChunkedOutputLoss

# Training
epochs: 1
max_steps_per_epoch: null
gradient_accumulation_steps: 4
compile: False

# Logging
output_dir: /tmp/Qwen2_5-0_5B-Instruct-lora-finetune
metric_logger:
  _component_: torchtune.training.metric_logging.DiskLogger
  log_dir: ${output_dir}
log_every_n_steps: 1
log_peak_memory_stats: False

# Environment
device: cuda
dtype: bf16
enable_activation_checkpointing: True

# Show case the usage of pytorch profiler
# Set enabled to False as it's only needed for debugging training
profiler:
  _component_: torchtune.training.setup_torch_profiler
  enabled: False

  # Output directory of trace artifacts
  output_dir: ${output_dir}/profiling_outputs

  # `torch.profiler.ProfilerActivity` types to trace
  cpu: True
  cuda: True

  # trace options passed to `torch.profiler.profile`
  profile_memory: False
  with_stack: False
  record_shapes: True
  with_flops: False

  # `torch.profiler.schedule` options:
  # wait_steps -> wait, warmup_steps -> warmup, active_steps -> active, num_cycles -> repeat
  wait_steps: 5
  warmup_steps: 5
  active_steps: 2
  num_cycles: 1
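The model section above maps directly onto the new LoRA builder. Below is a minimal sketch, not part of the diff, assuming lora_qwen2_5_0_5b accepts the same keyword arguments as the config fields.

from torchtune.models.qwen2_5 import lora_qwen2_5_0_5b

# LoRA adapters only on the attention q/v projections, rank 32 with alpha 64
# (an effective scale of alpha / rank = 2.0), matching the config above.
lora_model = lora_qwen2_5_0_5b(
    lora_attn_modules=["q_proj", "v_proj"],
    apply_lora_to_mlp=False,
    apply_lora_to_output=False,
    lora_rank=32,
    lora_alpha=64,
    lora_dropout=0.0,
)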
qwen2_5/0_5B_lora_single_device.yaml (new file, +114 lines)
# Config for single device LoRA finetuning in lora_finetune_single_device.py
# using a Qwen2.5 0.5B model
#
# This config assumes that you've run the following command before launching
# this run:
# tune download Qwen/Qwen2.5-0.5B-Instruct --output-dir /tmp/Qwen2_5-0_5B-Instruct --ignore-patterns None
#
# To launch on a single device, run the following command from root:
# tune run lora_finetune_single_device --config qwen2_5/0_5B_lora_single_device
#
# You can add specific overrides through the command line. For example
# to override the checkpointer directory while launching training
# you can run:
# tune run lora_finetune_single_device --config qwen2_5/0_5B_lora_single_device checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
#
# This config works only for training on single device.

# Model Arguments
model:
  _component_: torchtune.models.qwen2_5.lora_qwen2_5_0_5b
  lora_attn_modules: ['q_proj', 'v_proj']
  apply_lora_to_mlp: False
  apply_lora_to_output: False
  lora_rank: 32
  lora_alpha: 64
  lora_dropout: 0.0

tokenizer:
  _component_: torchtune.models.qwen2_5.qwen2_5_tokenizer
  path: /tmp/Qwen2_5-0_5B-Instruct/vocab.json
  merges_file: /tmp/Qwen2_5-0_5B-Instruct/merges.txt
  max_seq_len: null

checkpointer:
  _component_: torchtune.training.FullModelHFCheckpointer
  checkpoint_dir: /tmp/Qwen2_5-0_5B-Instruct
  checkpoint_files: [
    model.safetensors
  ]
  recipe_checkpoint: null
  output_dir: /tmp/Qwen2_5-0_5B-Instruct-lora-finetune
  model_type: QWEN2
resume_from_checkpoint: False

# Dataset and Sampler
dataset:
  _component_: torchtune.datasets.alpaca_cleaned_dataset
  packed: False
seed: null
shuffle: True
batch_size: 4

# Optimizer and Scheduler
optimizer:
  _component_: torch.optim.AdamW
  fused: True
  weight_decay: 0.01
  lr: 2e-3

lr_scheduler:
  _component_: torchtune.training.lr_schedulers.get_cosine_schedule_with_warmup
  num_warmup_steps: 100

loss:
  _component_: torchtune.modules.loss.CEWithChunkedOutputLoss

# Training
epochs: 1
max_steps_per_epoch: null
gradient_accumulation_steps: 4
compile: False

# Logging
output_dir: /tmp/Qwen2_5-0_5B-Instruct-lora-finetune
metric_logger:
  _component_: torchtune.training.metric_logging.DiskLogger
  log_dir: ${output_dir}
log_every_n_steps: 1
log_peak_memory_stats: False

# Environment
device: cuda
dtype: bf16

# Activations Offloading
enable_activation_checkpointing: True
enable_activation_offloading: False

# Show case the usage of pytorch profiler
# Set enabled to False as it's only needed for debugging training
profiler:
  _component_: torchtune.training.setup_torch_profiler
  enabled: False

  # Output directory of trace artifacts
  output_dir: ${output_dir}/profiling_outputs

  # `torch.profiler.ProfilerActivity` types to trace
  cpu: True
  cuda: True

  # trace options passed to `torch.profiler.profile`
  profile_memory: False
  with_stack: False
  record_shapes: True
  with_flops: False

  # `torch.profiler.schedule` options:
  # wait_steps -> wait, warmup_steps -> warmup, active_steps -> active, num_cycles -> repeat
  wait_steps: 5
  warmup_steps: 5
  active_steps: 2
  num_cycles: 1
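The profiler section in these configs mirrors the standard torch.profiler options; the config's own comment maps wait_steps/warmup_steps/active_steps/num_cycles onto wait/warmup/active/repeat. The sketch below shows the equivalent direct torch.profiler setup. What setup_torch_profiler actually constructs is an assumption here, and the trace handler and output path are illustrative only.

from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

# cpu/cuda flags -> ProfilerActivity list; step counts -> schedule(...), per the
# mapping noted in the config comment above.
prof = profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=5, warmup=5, active=2, repeat=1),
    record_shapes=True,
    profile_memory=False,
    with_stack=False,
    with_flops=False,
    # Output handler is an assumption for this sketch, not taken from the config.
    on_trace_ready=tensorboard_trace_handler("./profiling_outputs"),
)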