Tensor-parallel communication overlap with userbuffer backend #6444
Conversation
* add interfaces for tp_communication overlap
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Interface to provide custom userbuffer communicator settings by yaml file
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Jenkinsfile

Signed-off-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
This PR is stale because it has been open for 14 days with no activity. Remove the stale label, comment, or update the PR, or it will be closed in 7 days.
This may require us to port the changes from NVIDIA/apex#1626 and maybe NVIDIA/apex#1620 to Megatron-LM. That said, I don't see any megatron.core-related changes in this PR.
Signed-off-by: Tim Moon <tmoon@nvidia.com>
I've modified this PR to construct an MPI process group within NeMo, avoiding the need to port NVIDIA/apex#1626 to Megatron-LM. This would check off one of the Megatron-core bugs listed in #6625. This PR is probably dependent on #6627, which restores FP8 support.
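For readers unfamiliar with the bootstrap step: the userbuffer backend needs an MPI-style communicator alongside the usual NCCL groups. The sketch below is only an illustration of how such a group can be constructed from Python with mpi4py and torch.distributed; the function name and structure are assumptions, not the code actually added in this PR.

```python
# Illustrative sketch only -- not the code added in this PR.
# Assumes mpi4py is installed, PyTorch was built with MPI support,
# and the job is launched with mpirun/srun so MPI ranks exist.
import torch.distributed as dist
from mpi4py import MPI


def bootstrap_from_mpi():
    """Read rank/world size from the MPI launcher and init torch.distributed."""
    comm = MPI.COMM_WORLD              # communicator spanning all launched ranks
    rank = comm.Get_rank()
    world_size = comm.Get_size()

    if not dist.is_initialized():
        # With the MPI backend, rank and world size come from the MPI runtime.
        dist.init_process_group(backend="mpi")

    return comm, rank, world_size
```

Constructing the group inside NeMo keeps the MPI dependency out of Megatron-LM, which is the point of the change described above.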
Signed-off-by: arendu <adithya.r@gmail.com>
* [TTS] Add callback for saving audio during FastPitch training
* [TTS] Allow NGC model name for vocoder

Signed-off-by: Ryan <rlangman@nvidia.com>
* update batch size recommendation to min 32 for 43b
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

Signed-off-by: Zhilin Wang <wangzhilin12061996@hotmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Inconsistent usage of the word Note, which includes a broken reading in one case. I'm just doing some tidying -- not trying to be critical. Signed-off-by: Brian McBrayer <BrianMcBrayer@users.noreply.github.com>
Signed-off-by: Jocelyn Huang <jocelynh@nvidia.com>
* adding ssl config for fast-conformer; adding boolean flags for ssl losses
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* renaming fast-conformer to fastconformer in config folder

Signed-off-by: Krishna Puvvada <kpuvvada@nvidia.com>
Co-authored-by: Krishna Puvvada <kpuvvada@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The tree is invalid as this points to a blob, and the links would not open in colab. Signed-off-by: Brian McBrayer <BrianMcBrayer@users.noreply.github.com> Co-authored-by: Brian McBrayer <brian@acceleratepath.com>
Signed-off-by: fayejf <36722593+fayejf@users.noreply.github.com>
* add GPT FP8 ONNX export support
* changes: 1. Add dynamic axes for inputs 2. Update model input_example to resolve size error by TE
* Conform to Python style guidelines
* refactor to avoid typecasting bf16 string
* fix attribute error in export_utils
* set constant_folding to False by default
* refactor exportable wrapper into model class definition
* remove conditional replacement of modules
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* set fp8_recipe to None by default
* address all comments
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* typecast precision check for fp16
* rename export script

Signed-off-by: Asfiya Baig <asfiyab@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Boris Fomitchev <borisfom@users.noreply.github.com>
* [TTS] Add script for text preprocessing
* [TTS] Use Normalizer.input_case

Signed-off-by: Ryan <rlangman@nvidia.com>
* allows usage of pre-extracted base model
* extracted model checking and loading
* style
* style
* update
* removed sft eval script, can use peft eval script for sft models

Signed-off-by: arendu <adithya.r@gmail.com>
Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: Yi Dong <yidong@nvidia.com>
@ericharper What are the remaining issues of this PR?
Conflicts should be resolved, and it needs to pass CI.
* preprocess squad in sft format
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

Signed-off-by: arendu <adithya.r@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: smajumdar <titu1994@gmail.com>
Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
* [Temp] VP Fixes
* Revert logging

Signed-off-by: smajumdar <titu1994@gmail.com>
* add GraphTransducerLossBase abstract class with the interface for Graph-based losses
* add RNN-T implementation in GraphRnntLoss with tests
* add W-Transducer implementation in GraphWTransducerLoss with tests
* add GraphRnntLoss + GraphWTransducerLoss to RNN-T loss resolver

Signed-off-by: Vladimir Bataev <vbataev@nvidia.com>
* fix test fastpitch nightly
* Reformat
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Fix if elif condition

Signed-off-by: hsiehjackson <c2hsieh@ucsd.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Igor Gitman <igitman@nvidia.com>
…3835f79fc2c20 Signed-off-by: Eric Harper <complex451@gmail.com>
Co-authored-by: Tim Moon <4406448+timmoon10@users.noreply.github.com> Signed-off-by: Eric Harper <complex451@gmail.com>
Signed-off-by: ericharper <complex451@gmail.com>
Signed-off-by: ericharper <complex451@gmail.com>
Closing in favor of cherry-picking the changes.
What does this PR do ?
Adds (1) interfaces to TE (Transformer Engine) and (2) initialization of the process-group settings needed to support tensor-parallel communication overlap with the userbuffer backend.
Changelog
Usage
Set `ub_tp_comm_overlap` to `True` (see the sketch below).
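As a usage illustration (not code from this PR), the flag can be flipped programmatically with OmegaConf before launching training. The config path and the nesting of the key under `model` are assumptions:

```python
# Hypothetical usage sketch; the config path and key nesting are assumptions.
from omegaconf import OmegaConf

cfg = OmegaConf.load("examples/nlp/language_modeling/conf/megatron_gpt_config.yaml")
cfg.model.ub_tp_comm_overlap = True  # enable TP communication overlap via the userbuffer backend
OmegaConf.save(cfg, "megatron_gpt_config_tp_overlap.yaml")
```

The commit list also mentions an interface for supplying custom userbuffer communicator settings through a YAML file; its schema is not shown in this conversation, so it is not sketched here.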
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information