Fix test_script.py on TPU v2/v3 #2542
Conversation
Thanks for the fantastic fix! Looks like there are some style nits here; can you run `make style; make quality` after doing `pip install -e .[quality]`? Thanks!
Thanks @vanbasten23!
src/accelerate/data_loader.py (Outdated)
if isinstance(dataloader.sampler, RandomSampler) and state.distributed_type == DistributedType.XLA:
    # isinstance(dataloader.sampler, RandomSampler) indicates the original dataloader has `shuffle` enabled.
    generator = torch.Generator().manual_seed(42)
    dataloader.generator = generator
    dataloader.sampler.generator = generator
    dataloader.batch_sampler.sampler = dataloader.sampler
Same reasoning: that doesn't do anything. We need to adjust `sampler`; we should never be modifying the dataloader directly.
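A minimal sketch of the direction being suggested here, for illustration only (it is not necessarily the code that ended up in the PR): seed the sampler itself rather than mutating the DataLoader, reusing the names from the diff above.

# Illustrative sketch only: adjust the sampler instead of the DataLoader.
# `sampler`, `state`, `RandomSampler`, and `DistributedType` are assumed to be
# the same objects that appear in the diff above.
if isinstance(sampler, RandomSampler) and state.distributed_type == DistributedType.XLA:
    generator = torch.Generator().manual_seed(42)
    sampler.generator = generator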
Actually, modifying the dataloader has some impact on the new dataloader: #2542 (comment)
If I don't do `dataloader.generator = generator`, a later test fails with `AssertionError: Did not obtain the same model on CPU or distributed training.`: https://gist.github.com/vanbasten23/ce6d416859b89c23cfd72e272d356504. So I think it's needed.
Thanks for doing this! Overall it looks fine to me. If possible, I'd like you to verify before merging that things will still work fine when you run the code with `seedable_sampler` set to `True`, so we know whether we need to include it there as well.
(I'd rather we kept things without needing to modify the `dataloader` directly, but if that's not possible then it's okay.)
Sure. I set `use_seedable_sampler` to `True` and `accelerate test` passed. I also changed the default value of `use_seedable_sampler` to `True` in `def prepare_data_loader`, and `accelerate test` passed there too.
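For reference, a hedged sketch of the check being described, assuming `prepare_data_loader` exposes the `use_seedable_sampler` flag mentioned in this thread; the dataset and batch size are made up for illustration.

# Illustrative sketch of the verification discussed above. `prepare_data_loader`
# is normally called for you by `Accelerator.prepare`; it is invoked directly
# here only to show the `use_seedable_sampler` flag.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate.data_loader import prepare_data_loader

dataset = TensorDataset(torch.arange(64).float())
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

# Prepare the dataloader with the seedable sampler enabled, then run
# `accelerate test` (or the TPU test script) to confirm nothing regresses.
prepared = prepare_data_loader(dataloader, use_seedable_sampler=True)
for batch in prepared:
    pass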
What does this PR do?
Fixes #2479
`accelerate test` succeeds on TPU v4 but fails on v2/v3. The reason it fails on TPU v2/v3 is multithreading: on a TPU v3-8 there are 4 processes and 8 TPU devices, and each process spawns 2 threads, one per device. TPU v4 and later TPU generations don't use multithreading. On TPU v2/v3, because of this multithreading, we need to explicitly set the seed on each thread; otherwise each thread permutes the dataset indices differently and produces the duplicate indices shown in the GitHub issue. This PR explicitly sets the `generator` for each thread and fixes the test failure (see also the `xm.set_replication` issue for TPU v2/v3).
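To illustrate the failure mode described above, here is a standalone sketch (no TPU or accelerate code involved, just `torch.randperm` across threads): when every thread seeds its own generator with the same value, all threads produce the same permutation of indices.

# Standalone illustration of the per-thread seeding argument above.
import threading
import torch

results = {}

def worker(thread_id, fixed_seed=None):
    # Mimics one TPU v2/v3 device thread permuting the dataset indices.
    generator = torch.Generator()
    if fixed_seed is None:
        # Roughly what RandomSampler does without an explicit generator: draw a
        # seed from the shared global RNG, so concurrent threads can diverge.
        generator.manual_seed(int(torch.empty((), dtype=torch.int64).random_().item()))
    else:
        generator.manual_seed(fixed_seed)
    results[thread_id] = torch.randperm(8, generator=generator).tolist()

threads = [threading.Thread(target=worker, args=(i, 42)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the shared fixed seed every thread produces the same permutation, which is
# the property the PR relies on to avoid duplicate indices across devices.
assert results[0] == results[1]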
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.