
Conversation

@wmhst7 wmhst7 commented Oct 14, 2025

See #1806 (comment)
This PR prevents mutation of ReordererSequenceParallel state, because torch.utils.checkpoint.checkpoint is a higher-order op (HOP) and does not allow such mutation inside the checkpointed region.
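For context, here is a minimal sketch (made up for illustration, not torchtitan code) of the kind of attribute mutation that the checkpoint HOP rejects when traced by torch.compile:

```python
# Illustration only (assumed example, not from this PR): mutating module
# state inside a checkpointed forward is a side effect, and the checkpoint
# higher-order op does not allow side effects when traced by torch.compile.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(8, 8)
        self.cached_top_k = None  # hypothetical cached state

    def forward(self, x):
        self.cached_top_k = 2  # attribute mutation inside the checkpointed region
        return self.lin(x)


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = Block()

    def forward(self, x):
        # use_reentrant=False is the HOP-based checkpoint path
        return checkpoint(self.block, x, use_reentrant=False)


model = torch.compile(Model(), fullgraph=True)
x = torch.randn(4, 8, requires_grad=True)
# Under the constraint described above, this is expected to fail to compile
# (side effects inside a higher-order operator); removing the mutation fixes it.
out = model(x)
```

The diff below applies the same idea: read `top_k` from the wrapped module (`mod.top_k`) rather than from state cached on `self`.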

@meta-cla meta-cla bot added the CLA Signed label (managed by the Meta Open Source bot) on Oct 14, 2025
  local_rank = device_mesh.get_local_rank()
  # fact: top_scores.shape[0] // self.top_k = batch_size * seq_len // ep_degree
- token_indices_experts_sorted += top_scores.shape[0] // self.top_k * local_rank
+ token_indices_experts_sorted += top_scores.shape[0] // mod.top_k * local_rank
Could you please add a raise ValueError if mod doesn't have a top_k attribute, with a meaningful error message?
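A minimal sketch of what that check could look like (naming assumed; the merged code may phrase it differently):

```python
# Sketch only (assumed wording): fail fast with a descriptive message if the
# wrapped module does not expose `top_k` before it is used below.
if not hasattr(mod, "top_k"):
    raise ValueError(
        "ReordererSequenceParallel requires the wrapped module to define a "
        f"`top_k` attribute, but {type(mod).__name__} does not."
    )
token_indices_experts_sorted += top_scores.shape[0] // mod.top_k * local_rank
```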

wmhst7 and others added 2 commits October 14, 2025 15:45
Co-authored-by: tianyu-l <150487191+tianyu-l@users.noreply.github.com>
@tianyu-l tianyu-l left a comment


LGTM, thanks!

Please fix lint.

@tianyu-l tianyu-l merged commit 4faff04 into pytorch:main Oct 15, 2025
8 checks passed


Successfully merging this pull request may close these issues:

ReordererSequenceParallel (ETP=1) doesn't work with torch.compile + AC
