Add huggingface llava #524
base: main
Conversation
Commits:
- …ommented code and simplifying model type checks
- …_kernel_to_llava function
- …r bfloat16 support
- … outdated configurations
- …_to_llava for model type handling
- …iguration parameters
make test-convergence log
Good work! I'll take a look in a few days.
There have been multiple breaking changes in transformers recently; the convergence test couldn't pass with the newer transformers version.
Environment:
Let's add a condition to handle the different function signatures across older and newer transformers versions, since both are still being used by users. Something like:
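A minimal sketch of that version gate (the `lce_forward` / `lce_forward_deprecated` names and the 4.49.0 cutoff are assumptions for illustration, not confirmed in this thread):

```python
from packaging import version

import transformers
from transformers.models.llava import modeling_llava

# Hypothetical import: assumes this PR's patched forwards live in
# liger_kernel.transformers.model.llava, following the repo's existing layout.
from liger_kernel.transformers.model.llava import lce_forward, lce_forward_deprecated

# Pick the forward whose signature matches the installed transformers API.
# The 4.49.0 boundary is an assumed cutoff based on the breakage discussed above.
if version.parse(transformers.__version__) >= version.parse("4.49.0"):
    modeling_llava.LlavaForConditionalGeneration.forward = lce_forward
else:
    modeling_llava.LlavaForConditionalGeneration.forward = lce_forward_deprecated
```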
@Tcc0403
That's why, if you look at the hw & sw specification, my transformers version is 4.49.0.dev instead of 4.48.0; it's due to this issue.
Yes, see #571. Push the …
Hey @Tcc0403, is the test problem solved now?
I found why ruff doesn't format files correctly: uninstall ruff outside of your virtual environment, so that the ruff inside the venv can fetch the correct configuration. Overall the code LGTM. All tests passed; we can merge it after formatting. Minor: address the issue mentioned in #573, matching the latest transformers APIs (…).
Format files to match main branch
@Tcc0403, all tests pass. Could you please review this?
test convergence log:
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models.py
======================================================================================================================= test session starts =======================================================================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
collecting ...
----------------------------------------------------------------------------------------------------------------------- live log collection -----------------------------------------------------------------------------------------------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 13 items
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_llama3-32-0.0001-dtype0-1e-08-2e-05-0.0001-1e-05-0.005-1e-05] PASSED [ 7%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_llava-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 15%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 23%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 30%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype4-1e-05-0.1-0.005-1e-05-0.005-1e-05] PASSED [ 38%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype5-1e-05-0.1-0.005-1e-05-0.005-1e-05] PASSED [ 46%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_olmo2-32-0.0001-dtype6-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 53%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_phi3-32-0.0001-dtype7-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 61%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_mistral-32-0.0001-dtype8-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 69%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 76%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 84%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma2-32-0.0001-dtype11-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 92%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_granite3-32-0.0001-dtype12-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [100%]
============================================================================================================ 13 passed, 1 warning in 115.47s (0:01:55) ============================================================================================================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models_multimodal.py
======================================================================================================================= test session starts =======================================================================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
collecting ...
----------------------------------------------------------------------------------------------------------------------- live log collection -----------------------------------------------------------------------------------------------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 4 items
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_vl-32-0.0001-dtype0-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 25%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_5_vl-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 50%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 75%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_llava-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [100%]
============================================================================================================ 4 passed, 7 warnings in 81.11s (0:01:21) =============================================================================================================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models_with_logits.py
======================================================================================================================= test session starts =======================================================================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
collecting ...
----------------------------------------------------------------------------------------------------------------------- live log collection -----------------------------------------------------------------------------------------------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 13 items
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_llama3-32-0.0001-dtype0-1e-08-2e-05-0.0001-1e-05-0.005-1e-05] PASSED [ 7%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_llava-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 15%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 23%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 30%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype4-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 38%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype5-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 46%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_olmo2-32-0.0001-dtype6-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 53%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_phi3-32-0.0001-dtype7-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 61%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_mistral-32-0.0001-dtype8-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 69%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 76%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 84%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma2-32-0.0001-dtype11-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 92%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_granite3-32-0.0001-dtype12-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [100%]
============================================================================================================ 13 passed, 1 warning in 112.86s (0:01:52) ============================================================================================================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models.py
======================================================================================================================= test session starts =======================================================================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
collecting ...
----------------------------------------------------------------------------------------------------------------------- live log collection -----------------------------------------------------------------------------------------------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 12 items
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_llama3-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 8%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_llava-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 16%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_granite3-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 25%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_mllama-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 33%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2-32-0.0001-dtype4-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 41%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype5-0.001-0.05-0.1-0.01-0.01-0.01] PASSED [ 50%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype6-0.001-0.05-0.1-0.01-0.01-0.01] PASSED [ 58%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_phi3-32-0.0001-dtype7-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 66%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_mistral-32-0.0001-dtype8-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 75%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_olmo2-32-0.0001-dtype9-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 83%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_gemma1-32-0.0001-dtype10-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 91%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype11-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [100%]
============================================================================================================ 12 passed, 1 warning in 67.10s (0:01:07) =============================================================================================================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models_multimodal.py
======================================================================================================================= test session starts =======================================================================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
collecting ...
----------------------------------------------------------------------------------------------------------------------- live log collection -----------------------------------------------------------------------------------------------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 4 items
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_vl-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 25%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_5_vl-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 50%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_mllama-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 75%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_llava-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [100%]
============================================================================================================ 4 passed, 7 warnings in 60.99s (0:01:00) =============================================================================================================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models_with_logits.py
======================================================================================================================= test session starts =======================================================================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
collecting ...
----------------------------------------------------------------------------------------------------------------------- live log collection -----------------------------------------------------------------------------------------------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 12 items
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_llama3-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 8%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_llava-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 16%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_granite3-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 25%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_mllama-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 33%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2-32-0.0001-dtype4-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 41%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype5-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 50%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype6-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 58%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_phi3-32-0.0001-dtype7-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 66%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_mistral-32-0.0001-dtype8-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 75%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 83%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 91%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_olmo2-32-0.0001-dtype11-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [100%]
============================================================================================================ 12 passed, 1 warning in 70.05s (0:01:10) =============================================================================================================
@@ -496,7 +507,7 @@
    tie_word_embeddings=False,
    use_cache=True,
    vocab_size=32000,  # 128256,
    logits_scaling=8.0,
Does it fail the test?
Yes, nothing failed regarding mini_granite3.
Even after changing the values and running it again, it still passed the test.
convergence-test
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
----------------------------- live log collection ------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 13 items
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_llama3-32-0.0001-dtype0-1e-08-2e-05-0.0001-1e-05-0.005-1e-05] PASSED [ 7%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_llava-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 15%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 23%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 30%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype4-1e-05-0.1-0.005-1e-05-0.005-1e-05] PASSED [ 38%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype5-1e-05-0.1-0.005-1e-05-0.005-1e-05] PASSED [ 46%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_olmo2-32-0.0001-dtype6-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 53%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_phi3-32-0.0001-dtype7-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 61%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_mistral-32-0.0001-dtype8-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 69%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 76%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 84%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma2-32-0.0001-dtype11-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 92%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_granite3-32-0.0001-dtype12-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [100%]
================== 13 passed, 1 warning in 112.41s (0:01:52) ===================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models_multimodal.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
----------------------------- live log collection ------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 4 items
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_vl-32-0.0001-dtype0-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 25%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_5_vl-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 50%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 75%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_llava-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [100%]
=================== 4 passed, 5 warnings in 81.22s (0:01:21) ===================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models_with_logits.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
----------------------------- live log collection ------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 13 items
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_llama3-32-0.0001-dtype0-1e-08-2e-05-0.0001-1e-05-0.005-1e-05] PASSED [ 7%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_llava-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 15%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 23%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 30%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype4-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 38%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype5-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 46%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_olmo2-32-0.0001-dtype6-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 53%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_phi3-32-0.0001-dtype7-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 61%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_mistral-32-0.0001-dtype8-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 69%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 76%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 84%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma2-32-0.0001-dtype11-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 92%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_granite3-32-0.0001-dtype12-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [100%]
================== 13 passed, 1 warning in 112.43s (0:01:52) ===================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
----------------------------- live log collection ------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 12 items
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_llama3-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 8%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_llava-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 16%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_granite3-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 25%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_mllama-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 33%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2-32-0.0001-dtype4-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 41%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype5-0.001-0.05-0.1-0.01-0.01-0.01] PASSED [ 50%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype6-0.001-0.05-0.1-0.01-0.01-0.01] PASSED [ 58%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_phi3-32-0.0001-dtype7-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 66%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_mistral-32-0.0001-dtype8-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 75%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_olmo2-32-0.0001-dtype9-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 83%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_gemma1-32-0.0001-dtype10-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 91%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype11-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [100%]
=================== 12 passed, 1 warning in 67.35s (0:01:07) ===================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models_multimodal.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
----------------------------- live log collection ------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 4 items
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_vl-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 25%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_5_vl-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 50%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_mllama-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 75%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_llava-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [100%]
=================== 4 passed, 5 warnings in 61.02s (0:01:01) ===================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models_with_logits.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/jp/Liger-Kernel
configfile: pyproject.toml
plugins: jaxtyping-0.2.36, anyio-4.6.2.post1
----------------------------- live log collection ------------------------------
INFO numexpr.utils:utils.py:148 Note: NumExpr detected 48 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
INFO numexpr.utils:utils.py:161 NumExpr defaulting to 16 threads.
INFO datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 12 items
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_llama3-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 8%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_llava-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 16%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_granite3-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 25%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_mllama-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 33%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2-32-0.0001-dtype4-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 41%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype5-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 50%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype6-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 58%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_phi3-32-0.0001-dtype7-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 66%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_mistral-32-0.0001-dtype8-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 75%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 83%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 91%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_olmo2-32-0.0001-dtype11-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [100%]
=================== 12 passed, 1 warning in 66.39s (0:01:06) ===================
@add_start_docstrings_to_model_forward(LLAVA_INPUTS_DOCSTRING)
@deprecate_kwarg("num_logits_to_keep", new_name="logits_to_keep", version="4.50")
As far as I know, the deprecate_kwarg decorator should be on the latest function instead of the deprecated one, since it is meant to replace num_logits_to_keep with logits_to_keep to match the latest function signature.
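For illustration, a sketch of that placement, assuming transformers exposes the decorator under `transformers.utils.deprecation` as recent versions do (the forward signature below is abbreviated, not the actual LLaVA one):

```python
from transformers.utils.deprecation import deprecate_kwarg

# The decorator wraps the *new* forward, whose signature already uses
# logits_to_keep; callers still passing num_logits_to_keep get the kwarg
# renamed (with a deprecation warning) before the function body runs.
@deprecate_kwarg("num_logits_to_keep", new_name="logits_to_keep", version="4.50")
def lce_forward(self, input_ids=None, logits_to_keep=0, **kwargs):
    ...
```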
@@ -407,7 +515,6 @@ def run_mini_model_multimodal(
        print(f"Step {i}, Loss: {output.loss.item()}")
        loss_list.append(output.loss.item())

-       MINI_MODEL_SETUPS[model_name].liger_kernel_patch_revert_func(**revert_kwargs)
any reason why it is removed?
Summary
Resolves #514: adds Hugging Face LLaVA support and adapts the patch to recent transformers API changes.
Testing Done
huggingface-env
torch&hw-env
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4310 CPU @ 2.10GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 6
CPU max MHz: 3300.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 30 MiB (24 instances)
L3 cache: 36 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.5.1+cu121
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] Could not collect
- `make test` to ensure correctness
- `make checkstyle` to ensure code style
- `make test-convergence` to ensure convergence