
Make ConstantLengthDataset (or packing=True) shuffle examples before they are packed #2037

Merged: 5 commits into huggingface:main on Sep 13, 2024

Conversation

@muupan (Contributor) commented on Sep 8, 2024

What does this PR do?

To address #2030, this PR changes the behavior of ConstantLengthDataset when shuffle=True so that the examples are shuffled before they are packed. Shuffling after packing is kept as well; without it, the pieces of an example that is split across sequence boundaries would always end up in consecutive tensors.
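
For intuition, here is a minimal sketch of the idea (illustrative only; pack_with_shuffle is a hypothetical helper written against the FakeTokenizer interface used below, not the actual ConstantLengthDataset code): raw examples are shuffled before tokenization and concatenation, and the packed fixed-length sequences are still shuffled afterwards.

import random

def pack_with_shuffle(texts, tokenizer, seq_length, eos_token_id, shuffle=True):
    # Illustrative sketch only; not the TRL implementation.
    texts = list(texts)
    if shuffle:
        random.shuffle(texts)  # new in this PR: shuffle raw examples before packing
    token_ids = []
    for text in texts:
        token_ids.extend(tokenizer([text])["input_ids"][0])
        token_ids.append(eos_token_id)  # separator appended after each example
    # Split the concatenated stream into fixed-length sequences; the remainder is dropped
    sequences = [
        token_ids[i : i + seq_length]
        for i in range(0, len(token_ids) - seq_length + 1, seq_length)
    ]
    if shuffle:
        random.shuffle(sequences)  # kept from before: also shuffle the packed sequences
    return sequences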

Example code (same as #2030):

from datasets import Dataset
from trl.trainer import ConstantLengthDataset

# Dataset with "000", "111", ..., "777"
def gen():
    for i in range(8):
        yield {"text": f"{i}" * 3}
dataset = Dataset.from_generator(gen)

class FakeTokenizer:
    # Tokenizer that just converts "000" to [0, 0, 0], etc. EOS token is 8.
    def __init__(self):
        self.eos_token_id = 8
    
    def __call__(self, texts, **kwargs):
        return {"input_ids": [[int(x) for x in text] for text in texts]}

packed_dataset = ConstantLengthDataset(
    tokenizer=FakeTokenizer(),
    dataset=dataset,
    dataset_text_field="text",
    seq_length=7,
    infinite=False,
    chars_per_token=1,
    num_of_sequences=100,
    shuffle=True,
    append_concat_token=True,
    add_special_tokens=True,
)
print("First epoch")
for x in packed_dataset:
    print(x)
print("Second epoch")
for x in packed_dataset:
    print(x)

Output before this PR (trl==0.10.1):

First epoch
{'input_ids': tensor([0, 0, 0, 8, 1, 1, 1]), 'labels': tensor([0, 0, 0, 8, 1, 1, 1])}
{'input_ids': tensor([8, 2, 2, 2, 8, 3, 3]), 'labels': tensor([8, 2, 2, 2, 8, 3, 3])}
{'input_ids': tensor([5, 5, 8, 6, 6, 6, 8]), 'labels': tensor([5, 5, 8, 6, 6, 6, 8])}
{'input_ids': tensor([3, 8, 4, 4, 4, 8, 5]), 'labels': tensor([3, 8, 4, 4, 4, 8, 5])}
Second epoch
{'input_ids': tensor([3, 8, 4, 4, 4, 8, 5]), 'labels': tensor([3, 8, 4, 4, 4, 8, 5])}
{'input_ids': tensor([8, 2, 2, 2, 8, 3, 3]), 'labels': tensor([8, 2, 2, 2, 8, 3, 3])}
{'input_ids': tensor([0, 0, 0, 8, 1, 1, 1]), 'labels': tensor([0, 0, 0, 8, 1, 1, 1])}
{'input_ids': tensor([5, 5, 8, 6, 6, 6, 8]), 'labels': tensor([5, 5, 8, 6, 6, 6, 8])}

Output after this PR:

First epoch
{'input_ids': tensor([2, 2, 8, 5, 5, 5, 8]), 'labels': tensor([2, 2, 8, 5, 5, 5, 8])}
{'input_ids': tensor([1, 1, 1, 8, 7, 7, 7]), 'labels': tensor([1, 1, 1, 8, 7, 7, 7])}
{'input_ids': tensor([4, 8, 3, 3, 3, 8, 2]), 'labels': tensor([4, 8, 3, 3, 3, 8, 2])}
{'input_ids': tensor([8, 6, 6, 6, 8, 4, 4]), 'labels': tensor([8, 6, 6, 6, 8, 4, 4])}
Second epoch
{'input_ids': tensor([4, 4, 4, 8, 1, 1, 1]), 'labels': tensor([4, 4, 4, 8, 1, 1, 1])}
{'input_ids': tensor([2, 2, 8, 7, 7, 7, 8]), 'labels': tensor([2, 2, 8, 7, 7, 7, 8])}
{'input_ids': tensor([3, 8, 6, 6, 6, 8, 2]), 'labels': tensor([3, 8, 6, 6, 6, 8, 2])}
{'input_ids': tensor([8, 0, 0, 0, 8, 3, 3]), 'labels': tensor([8, 0, 0, 0, 8, 3, 3])}
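
As a sanity check (a hypothetical snippet, reusing packed_dataset from the example above): in both cases the 32 concatenated tokens are packed into 4 sequences of length 7, so 4 tokens are dropped per epoch; with shuffling before packing, which tokens get dropped can differ between epochs.

from collections import Counter

token_counts = Counter()
for x in packed_dataset:
    token_counts.update(x["input_ids"].tolist())
print(token_counts)
# 8 examples of 4 tokens each (3 digits + EOS) give 32 tokens; 4 windows of
# seq_length=7 keep 28 of them, so 4 tokens are dropped each epoch. With shuffling
# before packing, which tokens are dropped can now vary from epoch to epoch.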

Fixes #2030

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@qgallouedec

@muupan force-pushed the feature/shuffle-before-packing branch from 56a08d1 to 695570c on September 9, 2024, 09:17
@lewtun (Member) left a comment


Thanks for the clean solution @muupan - LGTM!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@muupan (Contributor, Author) commented on Sep 11, 2024

@lewtun Thanks for your approval! I have merged main via the "Update branch" button just now, and it still says "2 workflows awaiting approval". Is there anything I need to do to merge this PR?

@lewtun (Member) commented on Sep 12, 2024

> @lewtun Thanks for your approval! I have merged main via the "Update branch" button just now, and it still says "2 workflows awaiting approval". Is there anything I need to do to merge this PR?

I've just run the CI, so will merge if it's green :)

@lewtun merged commit 7a2bbe3 into huggingface:main on Sep 13, 2024 (9 checks passed).
@muupan deleted the feature/shuffle-before-packing branch on September 13, 2024, 12:24.
Successfully merging this pull request may close: ConstantLengthDataset should shuffle the order of samples before packing (#2030)