Fix test_script.py on TPU v2/v3 #2542

Merged: 7 commits into huggingface:main on Mar 13, 2024

Conversation

vanbasten23
Contributor

@vanbasten23 vanbasten23 commented Mar 10, 2024

What does this PR do?

Fixes #2479

  • This PR fixes the dataloader test: the same accelerate test succeeds on TPU v4 but fails on v2/v3. The failure on TPU v2/v3 comes from multithreading: a TPU v3-8 has 4 processes and 8 TPU devices, and each process spawns 2 threads, one per device, whereas TPU v4 and later generations do not use multithreading. Because of that multithreading, the seed has to be set explicitly on each thread; otherwise each thread permutes the dataset indices differently and produces the duplicate indices shown in the GitHub issue. This PR explicitly sets the generator for each thread and fixes the test failure (a minimal sketch follows this list).
  • This PR also fixes an xm.set_replication issue for TPU v2/v3.
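
A minimal sketch of the per-thread seeding idea described above. It is standalone, so it omits the DistributedType.XLA check used in the actual change, and the helper name and the fixed seed of 42 are illustrative rather than the PR's exact code:

import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

def seed_shuffled_dataloader(dataloader: DataLoader, seed: int = 42) -> DataLoader:
    # A RandomSampler means the original dataloader was built with shuffle=True.
    # Sharing one manually seeded generator across the dataloader, its sampler, and
    # the batch sampler makes every XLA thread (TPU v2/v3 runs one thread per device
    # inside a process) permute the dataset indices identically, avoiding duplicates.
    if isinstance(dataloader.sampler, RandomSampler):
        generator = torch.Generator().manual_seed(seed)
        dataloader.generator = generator
        dataloader.sampler.generator = generator
        dataloader.batch_sampler.sampler = dataloader.sampler
    return dataloader

# Illustrative usage: build a shuffled loader, then pin its shuffling to a fixed seed.
dataset = TensorDataset(torch.arange(16, dtype=torch.float32))
dl = seed_shuffled_dataloader(DataLoader(dataset, batch_size=4, shuffle=True))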

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@vanbasten23 vanbasten23 changed the title from "[DO NOT REVIEW YET] Fix test_script.py on TPU v2/v3" to "Fix test_script.py on TPU v2/v3" on Mar 11, 2024
@vanbasten23
Contributor Author

cc @will-cromar @muellerzr @SunMarc

@vanbasten23 vanbasten23 marked this pull request as ready for review March 11, 2024 18:02
Collaborator

@muellerzr muellerzr left a comment


Thanks for the fantastic fix!

Looks like there are some style nits here; can you run `make style; make quality` after doing `pip install -e .[quality]`? Thanks!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Contributor

@will-cromar will-cromar left a comment


Thanks @vanbasten23!

src/accelerate/data_loader.py (review thread, outdated and resolved)
Comment on lines 885 to 890
if isinstance(dataloader.sampler, RandomSampler) and state.distributed_type == DistributedType.XLA:
    # isinstance(dataloader.sampler, RandomSampler) indicates the original dataloader has `shuffle` enabled.
    # Share one manually seeded generator across the dataloader, its sampler, and the
    # batch sampler so every XLA thread permutes the dataset indices identically.
    generator = torch.Generator().manual_seed(42)
    dataloader.generator = generator
    dataloader.sampler.generator = generator
    dataloader.batch_sampler.sampler = dataloader.sampler
Collaborator


Same reasoning: that doesn't do anything. We need to adjust the sampler; we should never be modifying the dataloader directly.

Contributor Author


Actually, modifying the dataloader does have an impact on the new dataloader: #2542 (comment)

If I don't do dataloader.generator = generator, a later test fails with AssertionError: Did not obtain the same model on CPU or distributed training.: https://gist.github.com/vanbasten23/ce6d416859b89c23cfd72e272d356504.

So I think it's needed.

Collaborator

@muellerzr muellerzr left a comment


Thanks for doing this! Overall it looks fine to me. If possible, I'd like you to verify before merging that things still work when the code runs with use_seedable_sampler set to True, so we know whether we need to include it there as well.

(I'd rather we kept things without needing to modify the dataloader directly, but if that's not possible then it's okay.)

@vanbasten23
Contributor Author

> Thanks for doing this! Overall it looks fine to me. If possible, I'd like you to verify before merging that things still work when the code runs with use_seedable_sampler set to True, so we know whether we need to include it there as well.
>
> (I'd rather we kept things without needing to modify the dataloader directly, but if that's not possible then it's okay.)

Sure. I set use_seedable_sampler=True at

dl = prepare_data_loader(dl, state.device, state.num_processes, state.process_index, put_on_device=True)
and also at line 230, and the accelerate test passed. I also changed the default value of use_seedable_sampler to True in def prepare_data_loader, and the accelerate test passed too.
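
A minimal sketch of that verification, for context. The dataset and loader here are illustrative, not the test script's actual ones, and it assumes accelerate has already been configured (e.g. launched via accelerate launch) so AcceleratorState reflects the TPU setup:

import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate.data_loader import prepare_data_loader
from accelerate.state import AcceleratorState

state = AcceleratorState()  # assumes the accelerate environment is already set up
dl = DataLoader(TensorDataset(torch.arange(32, dtype=torch.float32)), batch_size=4, shuffle=True)

# Same call shape as in the test script, but opting into the seedable sampler so
# shuffling stays reproducible across processes (and across threads on TPU v2/v3).
dl = prepare_data_loader(
    dl,
    state.device,
    state.num_processes,
    state.process_index,
    put_on_device=True,
    use_seedable_sampler=True,
)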

@muellerzr muellerzr merged commit 02a8a9a into huggingface:main Mar 13, 2024
23 checks passed
Successfully merging this pull request may close these issues.

Not to use_seedable_sampler causes accelerate test failure on TPU v2/v3