
Adding max_seq_length to vision eval config #1802

Merged

Conversation

SalmanMohammadi
Collaborator

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

I added max_seq_length back to the eval recipe with #1773 but missed this config.
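For reference, the change is a single additional key in the eval config. A hypothetical sketch of the relevant fragment is below; the surrounding field names are illustrative only and not copied from the actual torchtune config:

```yaml
# Hypothetical fragment of a torchtune vision eval config.
# Only max_seq_length is the key this PR adds; other fields are illustrative.
tasks: ["mmmu_val_science"]
batch_size: 1
max_seq_length: 8192   # previously missing; the eval recipe errors without this key
```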

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any of these, just ask and we will happily help. We also have a contributing page with some guidance.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings


pytorch-bot bot commented Oct 10, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1802

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 51f6ed1 with merge base 5de5001:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 10, 2024
@RdoubleA
Contributor

Mind running this config real quick? I'm assuming going from not specifying the sequence length to setting it to 8192 should not increase memory usage or anything, but good to be sure

@SalmanMohammadi
Collaborator Author

SalmanMohammadi commented Oct 11, 2024

Mind running this config real quick? I'm assuming going from not specifying the sequence length to setting it to 8192 should not increase memory usage or anything, but good to be sure

So I tested this with the original PR that landed it, and with #1763 (MM), where I noticed the bug and fixed it. The config won't work without this key. @joecummings pointed it out, so I factored the fix out into this PR to ensure the broken config doesn't exist on main.

@joecummings joecummings merged commit c4044bc into pytorch:main Oct 11, 2024
17 checks passed
@SalmanMohammadi SalmanMohammadi deleted the add_max_seq_length_eval_config branch October 11, 2024 11:29
mori360 pushed a commit to mori360/torchtune that referenced this pull request Oct 14, 2024