
update configs #2128

Merged: 1 commit merged into pytorch:main from update_cfg_3 on Dec 6, 2024
Conversation

@felipemello1 (Contributor) commented on Dec 6, 2024

update 70B configs (and some others)

pytorch-bot (bot) commented on Dec 6, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2128

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 16486db with merge base 424ffc3:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Dec 6, 2024
@@ -107,8 +109,7 @@ dtype: bf16
 # Logging
 metric_logger:
   _component_: torchtune.training.metric_logging.DiskLogger
-  log_dir: ${output_dir}
-output_dir: /tmp/full-llama3_3-finetune
+  log_dir: ${output_dir}/logs
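For context, a minimal sketch of the logging section after this change. The key names and the /tmp path come from the diff above; placing output_dir as a single top-level key that the logger path interpolates is the pattern the reviewers discuss below, and its exact position in the file is an assumption.

# Top-level output directory that other paths derive from
output_dir: /tmp/full-llama3_3-finetune

# Logging
metric_logger:
  _component_: torchtune.training.metric_logging.DiskLogger
  log_dir: ${output_dir}/logs   # OmegaConf interpolation: logs live under output_dir

Because log_dir is derived by interpolation, overriding output_dir once on the command line (for example `tune run full_finetune_distributed --config llama3_3/70B_full output_dir=/my/run`, config name assumed) relocates checkpoints and logs together.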
Contributor (review comment on the diff above):

I think this is not consistent with other configs? Maybe update all of the rest of the configs?

felipemello1 (Contributor, Author):

The others were updated yesterday :)

@acisseJZhong (Contributor) left a comment:

lgtm, except the metrics log directory needs to be consistent with the rest of the configs?

@felipemello1 merged commit 26b2200 into pytorch:main on Dec 6, 2024
17 checks passed
@felipemello1 deleted the update_cfg_3 branch on December 6, 2024 at 23:59
rahul-sarvam added a commit to sarvamai/torchtune that referenced this pull request Dec 8, 2024
* Llama 3.3 70B (pytorch#2124)

* Llama 3.3 readme updates (pytorch#2125)

* update configs (pytorch#2107)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* Reduce logging output for distributed KD (pytorch#2120)

* Support Early Exit Loss and/or Layer Dropout (pytorch#1076)

Co-authored-by: ebsmothers <ebs@meta.com>

* Update checkpointing directory (pytorch#2074)

Co-authored-by: Felipe Mello <felipemello@fb.com>
Co-authored-by: vancoyendall <vancoykendall@gmail.com>

* pass correct arg (pytorch#2127)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* update configs (pytorch#2128)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* fix qat_lora_test (pytorch#2131)

Co-authored-by: Felipe Mello <felipemello@fb.com>

---------

Co-authored-by: Philip Bontrager <pbontrager@gmail.com>
Co-authored-by: ebsmothers <ebs@meta.com>
Co-authored-by: Felipe Mello <fmellomascarenhas@gmail.com>
Co-authored-by: Felipe Mello <felipemello@fb.com>
Co-authored-by: Joe Cummings <jrcummings27@gmail.com>
Co-authored-by: Mostafa Elhoushi <m.elhoushi@ieee.org>
Co-authored-by: vancoyendall <vancoykendall@gmail.com>
rahul-sarvam added a commit to sarvamai/torchtune that referenced this pull request Dec 9, 2024
rahul-sarvam added a commit to sarvamai/torchtune that referenced this pull request Dec 18, 2024
* Llama 3.3 70B (pytorch#2124)

* Llama 3.3 readme updates (pytorch#2125)

* update configs (pytorch#2107)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* Reduce logging output for distributed KD (pytorch#2120)

* Support Early Exit Loss and/or Layer Dropout (pytorch#1076)

Co-authored-by: ebsmothers <ebs@meta.com>

* Update checkpointing directory (pytorch#2074)

Co-authored-by: Felipe Mello <felipemello@fb.com>
Co-authored-by: vancoyendall <vancoykendall@gmail.com>

* pass correct arg (pytorch#2127)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* update configs (pytorch#2128)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* fix qat_lora_test (pytorch#2131)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* guard ckpt imports (pytorch#2133)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* [bug fix] add parents=True (pytorch#2136)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* [bug fix] re-add model (pytorch#2135)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* Update save sizes into GiB (pytorch#2143)

* [bug fix] remove config download when source is kaggle (pytorch#2144)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* [fix] remove "with_suffix" (pytorch#2146)

Co-authored-by: Felipe Mello <felipemello@fb.com>

* DoRA fixes (pytorch#2139)

Co-authored-by: Mircea Mironenco <5738815+mirceamironenco@users.noreply.github.com>

* [Fix] Llama 3.2 Vision decoder_trainable flag fixed (pytorch#2150)

* Small readme, config updates (pytorch#2157)

* Using `FormattedCheckpointFiles` in configs (pytorch#2147)

* Move ``get_world_size_and_rank`` to utils (pytorch#2155)

* Faster intermediate checkpoints with DCP async save in TorchTune (pytorch#2006)

Co-authored-by: Saurabh Mishra <msaurabh@fb.com>

* torchdata integration - multi-dataset and streaming support (pytorch#1929)

* Allow higher version of lm-eval (pytorch#2165)

* Using `FormattedCheckpointFiles` in configs... round 2 (pytorch#2167)

* [EZ] Fix set_torch_num_threads in multi-node. (pytorch#2164)

---------

Co-authored-by: Philip Bontrager <pbontrager@gmail.com>
Co-authored-by: ebsmothers <ebs@meta.com>
Co-authored-by: Felipe Mello <fmellomascarenhas@gmail.com>
Co-authored-by: Felipe Mello <felipemello@fb.com>
Co-authored-by: Joe Cummings <jrcummings27@gmail.com>
Co-authored-by: Mostafa Elhoushi <m.elhoushi@ieee.org>
Co-authored-by: vancoyendall <vancoykendall@gmail.com>
Co-authored-by: Mircea Mironenco <5738815+mirceamironenco@users.noreply.github.com>
Co-authored-by: salman <salman.mohammadi@outlook.com>
Co-authored-by: Saurabh Mishra <msaurabh@meta.com>
Co-authored-by: Saurabh Mishra <msaurabh@fb.com>
Co-authored-by: Andrew Ho <andrew.kenneth.ho@gmail.com>
Co-authored-by: Eugen Hotaj <eugen_hotaj_91@hotmail.com>
rahul-sarvam pushed a commit to sarvamai/torchtune that referenced this pull request Dec 23, 2024
Co-authored-by: Felipe Mello <felipemello@fb.com>
rahul-sarvam pushed a commit to sarvamai/torchtune that referenced this pull request Dec 23, 2024
Co-authored-by: Felipe Mello <felipemello@fb.com>
Labels: CLA Signed

3 participants