
Update _text_completion.py to support packed mode #1061

Conversation

@andyl98 (Contributor) commented Jun 6, 2024

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.
GitHub issue: #1058

Changelog

What are the changes made in this PR?
Add support for packed mode inside _text_completion.py
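
For readers skimming the thread, here is a minimal sketch of what this kind of builder change could look like, modeled on how torchtune's instruct/chat builders wrap their datasets in PackedDataset when packed=True. The exact signature, defaults, and import paths below are assumptions for illustration, not the merged diff.

```python
# Illustrative sketch only -- signatures and import paths are assumptions,
# not the exact code merged in this PR.
from typing import Optional, Union

from torchtune.datasets import PackedDataset, TextCompletionDataset


def text_completion_dataset(
    tokenizer,                        # a torchtune tokenizer instance
    source: str,                      # Hugging Face datasets source (or local files)
    column: Optional[str] = None,     # which column holds the raw text
    max_seq_len: Optional[int] = None,
    packed: bool = False,             # new flag: pack samples into fixed-length sequences
    **load_dataset_kwargs,
) -> Union[TextCompletionDataset, PackedDataset]:
    ds = TextCompletionDataset(
        tokenizer=tokenizer,
        source=source,
        column=column,
        max_seq_len=max_seq_len,
        **load_dataset_kwargs,
    )
    # When packing is requested, concatenate tokenized samples into
    # sequences of max_seq_len tokens to minimize padding.
    return PackedDataset(ds, max_seq_len=max_seq_len) if packed else ds
```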

Test plan

Please make sure to do each of the following if applicable to your PR. (If you're not sure about any one of these just ask and we will happily help.)

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
    • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

pytorch-bot (bot) commented Jun 6, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1061

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 6631b07 with merge base f9cb9e6:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot commented:

Hi @andyl98!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

facebook-github-bot added the CLA Signed label Jun 6, 2024 (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed).
@facebook-github-bot commented:

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@RdoubleA (Contributor) commented Jun 6, 2024

Thanks for this update. Indeed, having packing for text completion datasets is almost required for continued pre-training. I purposely left this out because I was concerned about the memory cost of packing, which would be more problematic for text completion datasets since they tend to be much larger. But for local text corpora this should be okay. Are you trying to train on local data?

The code change looks good to me. Do you mind sharing a distributed run on text_completion_dataset both with and without packing? I'm curious to see whether the performance gains are similar to instruct and chat, and it's also good to ensure that you don't run into any OOMs :)
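
For rough context on the memory concern above, a back-of-envelope estimate (illustrative numbers only, not a measurement of torchtune's PackedDataset) shows why corpus size matters:

```python
# Back-of-envelope estimate of holding an entire packed corpus in memory,
# assuming token IDs are stored as 64-bit integers (torch.long).
# Illustrative only -- not a measurement of torchtune's PackedDataset.
num_tokens = 1_000_000_000   # a ~1B-token text completion corpus
bytes_per_token = 8          # int64
gb = num_tokens * bytes_per_token / 1e9
print(f"~{gb:.0f} GB")       # ~8 GB for token IDs alone, before labels or other bookkeeping
```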

@andyl98 (Contributor, Author) commented Jun 6, 2024

> The code change looks good to me. Do you mind sharing a distributed run on text_completion_dataset both with and without packing? I'm curious to see whether the performance gains are similar to instruct and chat, and it's also good to ensure that you don't run into any OOMs :)

Sounds good, I'll update the results once a training run is done :) And yes, I'm training on local data with a small set (<1B tokens in total).
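
As an illustration of that local-data use case, a hypothetical call once packed is supported (the tokenizer, file path, and column name below are placeholders, and extra kwargs are assumed to be forwarded to datasets.load_dataset):

```python
# Hypothetical usage on a small local corpus; paths and names are placeholders.
from torchtune.datasets import text_completion_dataset
from torchtune.models.llama3 import llama3_tokenizer

tokenizer = llama3_tokenizer("/path/to/tokenizer.model")

ds = text_completion_dataset(
    tokenizer=tokenizer,
    source="text",                     # Hugging Face `datasets` plain-text loader
    data_files="/path/to/corpus.txt",  # forwarded to load_dataset via **load_dataset_kwargs
    column="text",
    max_seq_len=4096,
    packed=True,                       # the flag added in this PR
    split="train",
)
```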

Two review comments on torchtune/datasets/_text_completion.py (outdated, resolved).
@RdoubleA (Contributor) commented:

Hi @andyl98, were you able to try launching a run with this change? If you're having trouble let me know, and I could also launch a run on my end and get this merged in.

@andyl98 (Contributor, Author) commented Jun 11, 2024

[Screenshot: 2024-06-10, 7:47 PM — results from the test run]
Hi @RdoubleA, sorry, I wasn't able to work on this for a bit. I tested with a small dataset and the results look as expected. However, I do recognize the memory consumption issue with a larger dataset. Hopefully the mmap dataset feature will come soon!

@RdoubleA (Contributor) commented:

Appreciate you launching a test run! Looks good to me, and it's great to see a similar bump in QPS for text completion.

Yes, I am working on a more memory efficient dataset implementation, hopefully that will unlock using packing with any size dataset 😎

I'll run the CI and, if there are no other issues, get this merged. Thanks again!

RdoubleA merged commit 7d9b9c8 into pytorch:main on Jun 11, 2024
29 checks passed
maximegmd pushed a commit to maximegmd/torchtune that referenced this pull request Jul 13, 2024
Co-authored-by: RdoubleA <rafiayub@fb.com>