
fix issues with convert_nemo_llama_to_hf.py #7922

Merged
Zhilin123 merged 2 commits into main from convert_llama_to_hf_fixes on Nov 25, 2023

Conversation

Zhilin123 (Collaborator)

What does this PR do?

fix issues with convert_nemo_llama_to_hf.py

Collection: nlp

Changelog

  • Save the converted checkpoint so that the model weights are exposed at the top level of pytorch_model.bin (the layout Hugging Face expects) instead of being nested under the ['state_dict'] key.
  • Set model_config.tensor_model_parallel_size = 1 so the conversion also works when run with --cpu-only.

Usage

  • A usage example is sketched below.
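A minimal sanity check, not from the PR itself (the checkpoint path is a placeholder): after running convert_nemo_llama_to_hf.py, the converted checkpoint should load directly with Hugging Face transformers.

```python
# Sketch of a post-conversion sanity check.
# Assumptions: the conversion script has already written an HF-format checkpoint
# (including config and tokenizer files) to `hf_dir`; the path below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

hf_dir = "/path/to/converted_llama_hf"

model = AutoModelForCausalLM.from_pretrained(hf_dir)
tokenizer = AutoTokenizer.from_pretrained(hf_dir)

print(type(model).__name__)  # expected: LlamaForCausalLM if the conversion succeeded
```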

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines contain specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

@yidong72 (Collaborator) left a comment


LGTM, check with others to make sure it doesn't break anything.

@janekl (Collaborator) commented on Nov 24, 2023

Do we have a test for convert_nemo_llama_to_hf.py anywhere?

For example, like the one we have here for the script convert_hf_llama_to_nemo.py, which works in the other direction.

@Zhilin123 (Collaborator, Author) commented on Nov 24, 2023

@janekl I'm not sure whether such a test exists, since I'm not the original author of convert_nemo_llama_to_hf.py. The changes here came from trying to load the converted files into HF: previously this failed if you used pytorch_model.bin directly, because all model weights were nested under the ['state_dict'] key, whereas the corresponding pytorch_model.bin files from HF expose the weight keys directly at the top level. The only other change is forcing model_config.tensor_model_parallel_size = 1; without it, the conversion fails when running with --cpu-only. Maybe Utkarsh Uppal, the author of #7770, might know.
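A minimal sketch of the checkpoint-layout difference described above (illustrative only, not the PR's code; the file names are placeholders):

```python
import torch

# Load a checkpoint file and expose the weight tensors at the top level,
# which is the layout Hugging Face's from_pretrained expects.
ckpt = torch.load("pytorch_model.bin", map_location="cpu")

if "state_dict" in ckpt:
    # Pre-fix layout: everything nested under 'state_dict', so HF cannot find the weights.
    weights = ckpt["state_dict"]
else:
    # HF layout (and the layout after this fix): weight keys are already top-level.
    weights = ckpt

torch.save(weights, "pytorch_model.bin")  # re-save with the keys exposed directly
```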

@Zhilin123 (Collaborator, Author) commented:

jenkins

Zhilin123 merged commit 79bc929 into main on Nov 25, 2023
15 checks passed
Zhilin123 deleted the convert_llama_to_hf_fixes branch on November 25, 2023, 20:27
Zhilin123 restored the convert_llama_to_hf_fixes branch on November 25, 2023, 20:27
erhoo82 pushed a commit to erhoo82/NeMo that referenced this pull request Dec 2, 2023
pzelasko pushed a commit to pzelasko/NeMo that referenced this pull request Jan 3, 2024
rohitrango pushed a commit to rohitrango/NeMo that referenced this pull request Jun 25, 2024