
[float8 moe training] FSDP support #2413


Open: danielvegamyhre wants to merge 6 commits into main from new-fsdp-moe

Conversation

@danielvegamyhre (Contributor) commented on Jun 19, 2025

Summary

  • Remove the use_triton_for_per_group_scales flag; it made the code too complicated for too little value. The torch and Triton implementations can still be benchmarked by manually changing two lines, so no config flag is needed.
  • Update the ScaledGroupedMMTensor subclass to support FSDP ops while preserving the subclass.
  • Since the subclass wraps the nn.Parameter data tensor, the subclass must be preserved through FSDP-related ops (otherwise, if those ops produce regular tensors, the grouped_mm => differentiable scaled grouped mm op override won't happen); see the sketch below.
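
As a hedged illustration of that last bullet: the conversion step leaves the subclass as the parameter's data tensor, so every FSDP op on that tensor must hand the subclass back. The import path and constructor signature below are assumptions, and wrap_expert_weight_ is an illustrative helper, not the PR's conversion API.

```python
import torch
import torch.nn as nn

# Assumed import path for the subclass this PR updates; the constructor
# signature below is also an assumption.
from torchao.prototype.moe_training.tensor import ScaledGroupedMMTensor

def wrap_expert_weight_(module: nn.Module, name: str) -> None:
    """Illustrative helper (not the PR's conversion API).

    Re-registers a weight so its data tensor is the subclass. If an FSDP op
    on this parameter (split, copy_, view, ...) returned a plain tensor, the
    subclass's override would no longer fire when grouped_mm consumes the
    weight, and the scaled grouped mm would silently be skipped.
    """
    param = getattr(module, name)
    wrapped = ScaledGroupedMMTensor(param.data)
    module.register_parameter(name, nn.Parameter(wrapped))
```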

Design

  • Keep the "grouped_mm => differentiable scaled grouped mm" override in __torch_function__ to capture the graph for autograd.
  • Handle all other ops in __torch_dispatch__ to avoid autograd.
  • For FSDP-related ops, preserve the subclass; for everything else, returning a regular tensor is fine (sketched below).
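
A minimal, self-contained sketch of this design, assuming torch._grouped_mm is the intercepted entry point. The op set in _FSDP_OPS and the _differentiable_scaled_grouped_mm stub are illustrative placeholders, not the PR's actual implementation:

```python
import torch
from torch.utils._pytree import tree_map, tree_map_only

# Illustrative set of ops FSDP2 runs on parameter data during sharding,
# all-gather, and resharding; the exact set in the PR may differ.
_FSDP_OPS = {
    torch.ops.aten.split.Tensor,
    torch.ops.aten.new_zeros.default,
    torch.ops.aten.copy_.default,
    torch.ops.aten.view.default,
    torch.ops.aten.as_strided.default,
    torch.ops.aten._to_copy.default,
    torch.ops.aten._pin_memory.default,
}

def _differentiable_scaled_grouped_mm(*args, **kwargs):
    # Placeholder for the PR's float8 autograd op; here it just falls back
    # to the plain grouped mm so the sketch runs.
    return torch._grouped_mm(*args, **kwargs)

class ScaledGroupedMMTensor(torch.Tensor):
    """Toy wrapper subclass mirroring the design above (not the PR's code)."""

    @staticmethod
    def __new__(cls, data: torch.Tensor):
        return torch.Tensor._make_wrapper_subclass(
            cls,
            data.size(),
            strides=data.stride(),
            dtype=data.dtype,
            device=data.device,
            requires_grad=data.requires_grad,
        )

    def __init__(self, data: torch.Tensor):
        self._data = data

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Runs pre-autograd, so rerouting here lets autograd record the
        # differentiable scaled grouped mm instead of the plain op.
        if func is torch._grouped_mm:
            return _differentiable_scaled_grouped_mm(*args, **kwargs)
        with torch._C.DisableTorchFunctionSubclass():
            return func(*args, **kwargs)

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        # All other ops: unwrap, run on plain tensors (no autograd capture
        # here), and re-wrap only for FSDP-related ops so the subclass
        # survives sharding and unsharding.
        unwrap = lambda t: t._data if isinstance(t, cls) else t
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs or {}))
        if func in _FSDP_OPS:
            return tree_map_only(torch.Tensor, cls, out)
        return out
```

In practice a wrapper subclass needs more plumbing than this to work end to end with FSDP2 (e.g., the flatten/unflatten protocol for traceable wrapper subclasses); the sketch only isolates the __torch_function__/__torch_dispatch__ split described above.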

Test plan

Integration tests

  • Added an FSDP integration test in test/prototype/moe_training/test_fsdp.py; a hedged sketch of its shape follows.
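
A rough sketch of what such a test can look like. The toy model and the commented-out conversion call are hypothetical stand-ins, not the PR's API; the real test exercises a converted MoE:

```python
# Illustrative harness only; run with: torchrun --nproc_per_node=2 this_file.py
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import fully_shard

def main():
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    model = nn.Sequential(nn.Linear(16, 32), nn.Linear(32, 16)).cuda()
    # Real test: convert expert weights to the subclass first, e.g.
    # convert_moe_to_float8_training(model)  # hypothetical name
    fully_shard(model)  # FSDP2 per-parameter sharding
    x = torch.randn(8, 16, device="cuda")
    # Forward + backward must round-trip the subclass through FSDP's
    # all-gather/reshard ops for the scaled override to keep firing.
    model(x).sum().backward()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```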

Manual tests

  • Tested with llama4 on 2 H100s with FSDP degree 2, using the torchtitan PoC integration. Added temporary logging to confirm the differentiable scaled grouped mm op override is actually being called in the MoE.
  • Command: CUDA_VISIBLE_DEVICES="6,7" NGPU=2 CONFIG_FILE="./torchtitan/experiments/llama4/train_configs/debug_model.toml" ./run_train.sh --training.steps=100 --model.converters="float8" --float8.moe_fqns_prototype="experts" 2>&1 | tee ~/ao/fsdp-log.txt

@danielvegamyhre added the label topic: not user facing (use this tag if you don't want this PR to show up in release notes) on Jun 19, 2025
pytorch-bot (bot) commented on Jun 19, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2413

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 57d600e with merge base 809af2e:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the label CLA Signed (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Jun 19, 2025
@danielvegamyhre danielvegamyhre marked this pull request as draft June 19, 2025 18:38
@danielvegamyhre danielvegamyhre marked this pull request as ready for review June 20, 2025 00:07
@danielvegamyhre force-pushed the new-fsdp-moe branch 2 times, most recently from 2a1b16b to 4172b89, on June 20, 2025 00:22
@danielvegamyhre danielvegamyhre marked this pull request as draft June 20, 2025 15:16
@danielvegamyhre danielvegamyhre marked this pull request as ready for review June 20, 2025 15:55
@danielvegamyhre danielvegamyhre requested review from drisspg and vkuzo June 20, 2025 15:55
@danielvegamyhre (Contributor, Author) commented:

FYI @tianyu-l @ngimel as well, for awareness of progress.

Labels
CLA Signed · topic: not user facing