
Hyena Operator #9264

Merged (33 commits into NVIDIA:main, Jun 13, 2024)

Conversation

guyjacob
Contributor

What does this PR do?

Add Hyena operator (https://arxiv.org/abs/2302.10866)

Collection: NLP

Changelog

  • Add specific line-by-line info of high-level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 
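A hedged sketch of one possible invocation: the config name megatron_gpt_config_hyena comes from this PR's commit log, while the script path and overrides follow NeMo's standard GPT pretraining example and are assumptions here, not taken from this PR.

# Hypothetical usage sketch; verify paths and overrides against the merged code.
cd examples/nlp/language_modeling
python megatron_gpt_pretraining.py \
    --config-name=megatron_gpt_config_hyena \
    trainer.devices=1 \
    trainer.precision=bf16-mixed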

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g. Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

guyjacob added 28 commits May 16, 2024 18:32 (each signed off by Guy Jacob <guyj@nvidia.com>). Notes from the commit messages:

* Add missing input LayerNorm to spec (in the default attention spec it's fused with the projection Linear layer, so not explicitly defined)
* Shape conversion at start and end of Hyena forward
* Workaround for shape error in fftconv (see HazyResearch/safari#26 (comment))
* Explicitly convert kernel to FP32, since torch.fft doesn't support bf16 (see the sketch after this list)
* Remove sharded_state_dict from HyenaOperator (made redundant by the default implementation in Megatron)
* Refactor in the spirit of the implementation in the MAD-Lab repo: https://github.com/athms/mad-lab/blob/main/mad/model/layers/hyena.py
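For background on the FP32/bf16 note above and on the later "Bug fix - init FlashFFTConv with 2*seq_len" entry in the commit message below, here is a minimal sketch of an FFT-based causal long convolution of the kind Hyena relies on. It is illustrative only, not the PR's code; the function and variable names are made up.

import torch

def fft_causal_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    # u: (batch, d_model, seq_len) input; k: (d_model, seq_len) implicit long filter.
    seq_len = u.shape[-1]
    fft_size = 2 * seq_len  # pad so the circular convolution equals a linear one
    # torch.fft does not support bf16, so run the transform in fp32.
    u_f = torch.fft.rfft(u.float(), n=fft_size)
    k_f = torch.fft.rfft(k.float(), n=fft_size)
    y = torch.fft.irfft(u_f * k_f, n=fft_size)[..., :seq_len]  # keep the causal prefix
    return y.to(u.dtype)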
@github-actions github-actions bot added the NLP label May 21, 2024
akoumpa previously approved these changes May 28, 2024

@akoumpa (Member) left a comment:

LGTM, just a few minor comments; if you could address these that would be great, thanks!

* Clearer names + more documentation for config params
* Clearer README
* Check seq len < 8K with safari-fftconv
* Avoid 0*bias op during forward

Signed-off-by: Guy Jacob <guyj@nvidia.com>
akoumpa
akoumpa previously approved these changes Jun 9, 2024
Code scanning / CodeQL check notices on the test file (Unused import):

from nemo.collections.nlp.parts.utils_funcs import torch_dtype_from_precision

try:
    import fftconv

Import of 'fftconv' is not used.

HAVE_FFTCONV = False

try:
    import flashfftconv

Import of 'flashfftconv' is not used.

HAVE_FLASHFFTCONV = False

try:
    import causal_conv1d

Import of 'causal_conv1d' is not used.
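For context, the flagged imports are availability probes for optional dependencies (the pre-check list above asks reviewers to verify such guards). A minimal sketch of the usual pattern, with the module name taken from the snippet above and the surrounding structure assumed rather than copied from the PR:

# Sketch of an optional-dependency import guard; the "unused import" notice is
# expected because the import exists only to test availability.
try:
    import fftconv  # noqa: F401  (optional safari fftconv CUDA extension)

    HAVE_FFTCONV = True
except (ImportError, ModuleNotFoundError):
    HAVE_FFTCONV = False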
guyjacob added 2 commits June 10, 2024 15:08 (each signed off by Guy Jacob <guyj@nvidia.com>).
@ericharper ericharper merged commit a6a0aee into NVIDIA:main Jun 13, 2024
109 checks passed
JesusPaz pushed a commit to JesusPaz/NeMo that referenced this pull request Jun 18, 2024
* Initial reference code commit, unchanged

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Hyena code changes for NeMo compatibility

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* MCore spec override functionality + example config w. hyena

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Additional changes - now working on char-level TinyShakespeare

* Add missing input LayerNorm to spec (in the default attention
  spec it's fused with the projection Linear layer, so not
  explicitly defined)
* Shape conversion at start and end of Hyena forward

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Add fftconv cuda impl from safari

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Workaround for shape error in fftconv

See: HazyResearch/safari#26 (comment)
Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Explicitly convert kernel to FP32

(torch.fft doesn't support bf16)

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Working run configs

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Remove sharded_state_dict from HyenaOperator

(made redundant by the default implementation in Megatron)

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Update configs

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Testing TE Linear classes in HyenaOperator

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Revert to FusedDense for in/out projections after merging with 24.01.01

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Fix bug (use fused LNorm+Linear), bring back TE layers

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Configs rename + cleanup

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* FlashFFTConv, Multi-head, some cleanup

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Bug fix - init FlashFFTConv with 2*seq_len

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* ModuleSpec + replace nn.Conv1d with causal_conv1d

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Remove unneeded arguments

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* More cleanup, remove fftconv ref functions

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Refactor HyenaFilter + more cleanup

* Refactor in spirit of implementation in MAD-Lab repo:
  https://github.com/athms/mad-lab/blob/main/mad/model/layers/hyena.py

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Add missing attributions

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Remove fftconv sources

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Bug fixes

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Remove d_model from external API, take from TransformerConfig

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* cleanup config

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Remove spec override logic (possibly push separately)

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Add tests

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Keep only megatron_gpt_config_hyena (w. 153m parameters)

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Black + isort formatting changes

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Fixes following PR review

* Clearer names + more documentation for config params
* Clearer README
* Check seq len < 8K with safari-fftconv
* Avoid 0*bias op during forward

Signed-off-by: Guy Jacob <guyj@nvidia.com>

* Fix tests following param name changes

Signed-off-by: Guy Jacob <guyj@nvidia.com>

---------

Signed-off-by: Guy Jacob <guyj@nvidia.com>
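As background for the "ModuleSpec + replace nn.Conv1d with causal_conv1d" step in the commit message above: the short convolution applied to the projections in Hyena is causal and, in the reference implementation, depthwise, and causal_conv1d provides a fused CUDA kernel for that operation. A minimal sketch of the equivalent operation in plain PyTorch follows (illustrative only, not the PR's code):

import torch
import torch.nn.functional as F

def causal_depthwise_conv1d(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # x: (batch, channels, seq_len); weight: (channels, 1, kernel_size).
    # Left-pad by kernel_size - 1 so output position t depends only on inputs <= t.
    kernel_size = weight.shape[-1]
    x = F.pad(x, (kernel_size - 1, 0))
    return F.conv1d(x, weight, groups=weight.shape[0])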
rohitrango pushed a commit to rohitrango/NeMo that referenced this pull request Jun 25, 2024
@ko3n1g mentioned this pull request Jul 18, 2024