
Generate: replace breaks by a loop condition #29662

Merged: 3 commits into huggingface:main from the loop_condition branch on Mar 15, 2024

Conversation

@gante (Member) commented Mar 14, 2024

What does this PR do?

Pulled from the torch.compile(..., fullgraph=True) draft PR: #29374

It replaces the breaks that exit the endless generation loop with an equivalent function that returns False when generation should stop, while preserving ZeRO stage 3 support. This is not only an improvement in code reuse, but also a hard requirement for enabling torch.compile(..., fullgraph=True): break statements and data-dependent control flow are not supported.
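For context, here is a minimal sketch of the pattern described above, in which the loop condition is computed by a helper instead of exiting with break. The helper name, signature, and body are assumptions for illustration and may not match the merged code exactly:

import torch
import torch.distributed as dist

def _has_unfinished_sequences(this_peer_finished: bool, synced_gpus: bool, device: torch.device) -> bool:
    # Returns True while generation should continue; this replaces the in-loop breaks.
    if synced_gpus:
        # Under ZeRO stage 3 every peer must keep running forward passes until
        # all peers are done, so the local flag is all-reduced across the group.
        this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0, device=device)
        dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
        if this_peer_finished_flag.item() == 0.0:
            return False
    elif this_peer_finished:
        return False
    return True

# The endless loop then becomes roughly:
# while _has_unfinished_sequences(this_peer_finished, synced_gpus, device=input_ids.device):
#     ...  # one decoding step
#     this_peer_finished = unfinished_sequences.max() == 0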


if this_peer_finished and not synced_gpus:
    break
this_peer_finished = unfinished_sequences.max() == 0
@gante (Member Author)

The previous version also relied on data-dependent control flow, so this change is for torch.compile readiness :)
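As an aside, here is a toy illustration (not code from this PR) of why data-dependent control flow is a problem for torch.compile(..., fullgraph=True): branching on a tensor value forces a graph break, which fullgraph mode turns into an error.

import torch

def step(unfinished_sequences):
    # Data-dependent Python branching: the branch taken depends on tensor values.
    if unfinished_sequences.max() == 0:
        return unfinished_sequences
    return unfinished_sequences - 1

compiled = torch.compile(step, fullgraph=True)
# compiled(torch.tensor([2, 1]))  # expected to raise under fullgraph=True,
#                                 # because the data-dependent `if` forces a graph break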

@gante gante marked this pull request as ready for review March 14, 2024 20:05
@gante gante requested a review from amyeroberts March 14, 2024 20:05
@gante (Member Author) commented Mar 14, 2024

FYI @zucchini-nlp (the stopping criteria solution did not preserve ZeRO stage 3 support)

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@amyeroberts (Collaborator) left a comment

Thanks for working on this - so much cleaner 🤩

Review comment on src/transformers/generation/utils.py (outdated, resolved)
Comment on lines 1795 to 1798:

else:
    if this_peer_finished:
        return False
return True
@amyeroberts (Collaborator)

Or actually, we can just do:

Suggested change:
- else:
-     if this_peer_finished:
-         return False
- return True
+ return not this_peer_finished

@gante (Member Author)

This solution can return False when synced_gpus is True and this_peer_finished is True, which is not intended -- this_peer_finished has to be True on all distributed devices when synced_gpus is True 🤗
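A small hypothetical sketch of the distinction being made here (the names should_continue and all_peers_finished are invented for illustration; they are not from the PR):

def should_continue(this_peer_finished: bool, synced_gpus: bool, all_peers_finished: bool) -> bool:
    if synced_gpus:
        # With ZeRO stage 3, a peer that is locally finished must keep running
        # forward passes until every peer is finished.
        return not all_peers_finished
    # Unsynced case: the local flag alone decides.
    return not this_peer_finished

# Collapsing the whole body to `return not this_peer_finished` would stop a synced
# peer as soon as it finishes locally, which is the behaviour the comment above warns against.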

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
@gante gante merged commit 9e4df7c into huggingface:main Mar 15, 2024
21 checks passed
@gante gante deleted the loop_condition branch March 15, 2024 17:49
itazap pushed a commit that referenced this pull request May 14, 2024
* replace breaks by a loop condition

* Update src/transformers/generation/utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>