Hyena Operator #9264
Conversation
Review comment on examples/nlp/language_modeling/conf/megatron_gpt_config_hyena.yaml (outdated, resolved):
LGTM, just a few minor comments. If you can address these, that would be great, thanks!
Fixes following PR review:
* Clearer names + more documentation for config params
* Clearer README
* Check seq len < 8K with safari-fftconv
* Avoid 0*bias op during forward (see the sketch below)

Signed-off-by: Guy Jacob <guyj@nvidia.com>
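A minimal sketch of that last item, assuming a learned per-channel term D as in the safari reference code (use_bias, D, u, and y are illustrative names, not necessarily this PR's):

```python
# Hypothetical: when the bias is disabled, skip the term entirely instead
# of multiplying a zero tensor on every forward pass (the 0*bias op).
if self.use_bias:
    y = y + u * self.D.unsqueeze(-1)
```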
Diff excerpt with the optional-dependency import guards (reconstructed: the collapsed HAVE_* = True and except ImportError lines are inferred from the standard guard pattern, and HAVE_CAUSAL_CONV1D is an assumed flag name). Code scanning (CodeQL) raised an "Unused import" notice on each of the three guarded imports.

```python
from nemo.collections.nlp.parts.utils_funcs import torch_dtype_from_precision

try:
    import fftconv  # safari fftconv CUDA kernel

    HAVE_FFTCONV = True
except ImportError:
    HAVE_FFTCONV = False

try:
    import flashfftconv

    HAVE_FLASHFFTCONV = True
except ImportError:
    HAVE_FLASHFFTCONV = False

try:
    import causal_conv1d

    HAVE_CAUSAL_CONV1D = True
except ImportError:
    HAVE_CAUSAL_CONV1D = False
```
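These flags presumably gate which convolution backend runs at forward time. A hedged sketch of such a dispatch (illustrative control flow, not the PR's actual code; the only grounded rule is the seq len < 8K check for safari-fftconv mentioned in this PR):

```python
def select_conv_backend(seq_len: int) -> str:
    # Illustrative: prefer fused CUDA kernels when installed.
    if HAVE_FFTCONV and seq_len < 8192:
        return "fftconv"        # safari kernel, checked for seq len < 8K
    if HAVE_FLASHFFTCONV:
        return "flashfftconv"   # long-sequence FFT convolution kernel
    return "torch_fft"          # pure-PyTorch fallback
```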
Changelog (squashed commit history):

* Initial reference code commit, unchanged
* Hyena code changes for NeMo compatibility
* MCore spec override functionality + example config w. hyena
* Additional changes - now working on char-level TinyShakespeare
* Add missing input LayerNorm to spec (in the default attention spec it's fused with the projection Linear layer, so not explicitly defined)
* Shape conversion at start and end of Hyena forward
* Add fftconv cuda impl from safari
* Workaround for shape error in fftconv (see HazyResearch/safari#26 (comment))
* Explicitly convert kernel to FP32 (torch.fft doesn't support bf16)
* Working run configs
* Remove sharded_state_dict from HyenaOperator (made redundant by the default implementation in Megatron)
* Update configs
* Testing TE Linear classes in HyenaOperator
* Revert to FusedDense for in/out projections after merging with 24.01.01
* Fix bug (use fused LNorm+Linear), bring back TE layers
* Configs rename + cleanup
* FlashFFTConv, multi-head, some cleanup
* Bug fix - init FlashFFTConv with 2*seq_len
* ModuleSpec + replace nn.Conv1d with causal_conv1d (see the sketch after this list)
* Remove unneeded arguments
* More cleanup, remove fftconv ref functions
* Refactor HyenaFilter + more cleanup, in the spirit of the implementation in the MAD-Lab repo: https://github.com/athms/mad-lab/blob/main/mad/model/layers/hyena.py
* Add missing attributions
* Remove fftconv sources
* Bug fixes
* Remove d_model from external API, take from TransformerConfig
* Cleanup config
* Remove spec override logic (possibly push separately)
* Add tests
* Keep only megatron_gpt_config_hyena (w. 153m parameters)
* Black + isort formatting changes
* Fixes following PR review: clearer names + more documentation for config params; clearer README; check seq len < 8K with safari-fftconv; avoid 0*bias op during forward
* Fix tests following param name changes

Signed-off-by: Guy Jacob <guyj@nvidia.com>
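On the nn.Conv1d → causal_conv1d change: a minimal sketch of the causality pattern being fused, assuming the usual depthwise short convolution (shapes and names are illustrative, not this PR's exact code):

```python
import torch
import torch.nn as nn

d_model, kernel_size, seq_len = 64, 3, 128
u = torch.randn(2, d_model, seq_len)  # (batch, channels, seq_len)

# Causal depthwise conv with plain nn.Conv1d: left-pad by kernel_size - 1,
# then drop the trailing outputs so position t only sees inputs <= t.
conv = nn.Conv1d(d_model, d_model, kernel_size, groups=d_model, padding=kernel_size - 1)
y = conv(u)[..., :seq_len]
# causal_conv1d provides the same semantics in a single fused CUDA kernel.
```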
What does this PR do?
Adds the Hyena operator (https://arxiv.org/abs/2302.10866).
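For context, Hyena replaces attention with implicit long convolutions evaluated through the FFT. A minimal sketch of that core op, modeled on the safari reference implementation (the function and argument names here are illustrative, not this PR's API); the fp32 cast and the 2*seq_len padding mirror items in the changelog above:

```python
import torch

def fft_long_conv(u: torch.Tensor, k: torch.Tensor, D: torch.Tensor) -> torch.Tensor:
    """y = causal_conv(u, k) + u * D, evaluated via FFT.

    u: (batch, d_model, seq_len) input
    k: (d_model, seq_len) implicit filter from the Hyena filter network
    D: (d_model,) learned per-channel skip term
    """
    seq_len = u.shape[-1]
    fft_size = 2 * seq_len  # zero-pad so the circular convolution is causal
    # torch.fft doesn't support bf16, so run the transforms in fp32
    k_f = torch.fft.rfft(k.float(), n=fft_size) / fft_size
    u_f = torch.fft.rfft(u.float(), n=fft_size)
    y = torch.fft.irfft(u_f * k_f, n=fft_size, norm="forward")[..., :seq_len]
    return (y + u.float() * D.unsqueeze(-1)).to(u.dtype)
```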
Collection: NLP
Changelog
Usage
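The example config added by this PR can be inspected with OmegaConf (which NeMo uses for its YAML configs); a hedged sketch, with only the config path taken from this PR and everything else standard OmegaConf usage:

```python
from omegaconf import OmegaConf

# Load the example Hyena config added in this PR and inspect the model
# section, which carries the Hyena-specific parameters.
cfg = OmegaConf.load("examples/nlp/language_modeling/conf/megatron_gpt_config_hyena.yaml")
print(OmegaConf.to_yaml(cfg.model))
```

Training itself would go through NeMo's usual megatron_gpt_pretraining.py entry point with this config; that launch flow is not changed by this PR.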
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list the specific people who can review PRs in various areas.
Additional Information