
Fixing quantization in eval recipe #1777

Merged: 5 commits into pytorch:main on Oct 9, 2024

Conversation

@SalmanMohammadi (Collaborator) commented Oct 9, 2024

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.
Closes #1776

On main


(tune) salman@combuter:~/torchtune$ tune run eleuther_eval --config target/quantize_eval.yaml 
2024-10-09:11:08:37,651 INFO     [_logging.py:101] Running EleutherEvalRecipe with resolved config:

batch_size: 1
checkpointer:
  _component_: torchtune.training.FullModelTorchTuneCheckpointer
  checkpoint_dir: ./target/quantized_qat
  checkpoint_files:
  - pytorch_model-8da4w.pt
  model_type: LLAMA2
  output_dir: ./target/tmp
device: cuda
dtype: bf16
enable_kv_cache: true
limit: 20
max_seq_length: 2048
model:
  _component_: torchtune.models.llama2.llama2
  embed_dim: 2048
  max_seq_len: 4096
  norm_eps: 1.0e-05
  num_heads: 32
  num_kv_heads: 4
  num_layers: 22
  vocab_size: 32000
quantizer:
  _component_: torchtune.training.quantization.Int8DynActInt4WeightQuantizer
  groupsize: 256
seed: 1234
tasks:
- hellaswag
tokenizer:
  _component_: torchtune.models.llama2.llama2_tokenizer
  path: ./target/1b_normal/tokenizer.model

Traceback (most recent call last):
  File "/home/salman/.pyenv/versions/3.11.9/envs/tune/bin/tune", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/salman/torchtune/torchtune/_cli/tune.py", line 49, in main
    parser.run(args)
  File "/home/salman/torchtune/torchtune/_cli/tune.py", line 43, in run
    args.func(args)
  File "/home/salman/torchtune/torchtune/_cli/run.py", line 208, in _run_cmd
    self._run_single_device(args, is_builtin=is_builtin)
  File "/home/salman/torchtune/torchtune/_cli/run.py", line 102, in _run_single_device
    runpy.run_path(str(args.recipe), run_name="__main__")
  File "<frozen runpy>", line 291, in run_path
  File "<frozen runpy>", line 98, in _run_module_code
  File "<frozen runpy>", line 88, in _run_code
  File "/home/salman/torchtune/recipes/eleuther_eval.py", line 576, in <module>
    sys.exit(recipe_main())
             ^^^^^^^^^^^^^
  File "/home/salman/torchtune/torchtune/config/_parse.py", line 99, in wrapper
    sys.exit(recipe_main(conf))
             ^^^^^^^^^^^^^^^^^
  File "/home/salman/torchtune/recipes/eleuther_eval.py", line 571, in recipe_main
    recipe.setup(cfg=cfg)
  File "/home/salman/torchtune/recipes/eleuther_eval.py", line 494, in setup
    for k, v in model_state_dict.items():
                ^^^^^^^^^^^^^^^^
NameError: name 'model_state_dict' is not defined
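
The failure above is a use-before-definition bug: on the quantized code path, the recipe iterates over `model_state_dict` without that name ever being bound. A minimal, hypothetical reproduction of the same class of bug (the names mirror the traceback, but this is not the recipe's actual code):

```python
def setup(quantizer_cfg=None):
    # model_state_dict is only bound on the non-quantized path...
    if quantizer_cfg is None:
        model_state_dict = {"layer.weight": 1.0}
    # ...but this loop runs unconditionally, so passing a quantizer
    # config raises UnboundLocalError (a subclass of NameError),
    # analogous to the failure in the traceback above.
    for k, v in model_state_dict.items():
        print(k, v)
```

Calling `setup(quantizer_cfg={"groupsize": 256})` in this sketch fails at the loop, while the non-quantized path runs cleanly.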

On this branch

2024-10-09:10:53:00,977 INFO     [_logging.py:101] Running EleutherEvalRecipe with resolved config:

batch_size: 1
checkpointer:
  _component_: torchtune.training.FullModelTorchTuneCheckpointer
  checkpoint_dir: ./target/quantized
  checkpoint_files:
  - pytorch_model-8da4w.pt
  model_type: LLAMA2
  output_dir: ./target/tmp
device: cuda
dtype: bf16
enable_kv_cache: true
limit: 20
max_seq_length: 2048
model:
  _component_: torchtune.models.llama2.llama2
  embed_dim: 2048
  max_seq_len: 4096
  norm_eps: 1.0e-05
  num_heads: 32
  num_kv_heads: 4
  num_layers: 22
  vocab_size: 32000
quantizer:
  _component_: torchtune.training.quantization.Int8DynActInt4WeightQuantizer
  groupsize: 256
seed: 1234
tasks:
- hellaswag
tokenizer:
  _component_: torchtune.models.llama2.llama2_tokenizer
  path: ./target/1b_normal/tokenizer.model

2024-10-09:10:53:02,335 INFO     [eleuther_eval.py:503] Model is initialized with precision torch.bfloat16.
2024-10-09:10:53:02,363 INFO     [huggingface.py:132] Using device 'cuda:0'
/home/salman/.pyenv/versions/3.11.9/envs/tune/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
2024-10-09:10:53:02,727 INFO     [huggingface.py:368] Model parallel was set to False, max memory was not set, and device map was set to {'': 'cuda:0'}
2024-10-09:10:53:03,984 INFO     [__init__.py:491] `group` and `group_alias` keys in TaskConfigs are deprecated and will be removed in v0.4.5 of lm_eval. The new `tag` field will be used to allow for a shortcut to a group of tasks one does not wish to aggregate metrics across. `group`s which aggregate across subtasks must be only defined in a separate group config file, which will be the official way to create groups that support cross-task aggregation as in `mmlu`. Please see the v0.4.4 patch notes and our documentation: https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#advanced-group-configs for more information.
2024-10-09:10:53:17,167 INFO     [eleuther_eval.py:552] Running evaluation on the following tasks: ['hellaswag']
2024-10-09:10:53:17,168 INFO     [task.py:428] Building contexts for hellaswag on rank 0...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 3051.40it/s]
2024-10-09:10:53:17,183 INFO     [evaluator.py:485] Running loglikelihood requests
Running loglikelihood requests: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80/80 [00:10<00:00,  7.29it/s]
2024-10-09:10:53:28,201 INFO     [eleuther_eval.py:561] Eval completed in 11.03 seconds.
2024-10-09:10:53:28,202 INFO     [eleuther_eval.py:562] Max memory allocated: 2.38 GB
2024-10-09:10:53:28,270 INFO     [eleuther_eval.py:566] 

|  Tasks  |Version|Filter|n-shot| Metric |   |Value|   |Stderr|
|---------|------:|------|------|--------|---|----:|---|-----:|
|hellaswag|      1|none  |None  |acc     ||  0.3|±  |0.1051|
|         |       |none  |None  |acc_norm||  0.5|±  |0.1147|
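
Both runs begin by resolving the `_component_` keys in the YAML config into Python callables. A simplified sketch of how such dotted-path instantiation generally works (this is an illustration, not torchtune's actual config instantiation code):

```python
import importlib

def instantiate_component(cfg: dict):
    """Resolve a dotted `_component_` path and call the resulting
    object with the remaining keys as keyword arguments (sketch)."""
    cfg = dict(cfg)  # don't mutate the caller's config
    module_path, _, attr = cfg.pop("_component_").rpartition(".")
    component = getattr(importlib.import_module(module_path), attr)
    return component(**cfg)

# The quantizer section of the config above would resolve like:
# instantiate_component({
#     "_component_": "torchtune.training.quantization.Int8DynActInt4WeightQuantizer",
#     "groupsize": 256,
# })
```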

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings


pytorch-bot bot commented Oct 9, 2024

As of commit 6242f2b with merge base 27b0fcc: ✅ no failures. See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1777.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 9, 2024
@joecummings (Contributor) left a comment


Can we add a quick test to make sure we catch the TorchTuneCheckpointerError?

Also a before (not working) / after (working) for QAT in the PR description would be great.

@SalmanMohammadi (Collaborator, Author) commented Oct 9, 2024

> Can we add a quick test to make sure we catch the TorchTuneCheckpointerError?

Which error do you mean, sorry? In this PR we raise an error if the user isn't using the correct checkpointer (i.e. the TorchTune checkpointer).

edit: misread, you meant a test. yep yep!
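
The guard discussed here (quantized checkpoints only work with the TorchTune checkpointer) can be sketched as an `isinstance` check at setup time. The class names and error message below are illustrative stand-ins, not the recipe's exact code:

```python
class FullModelTorchTuneCheckpointer:
    """Stand-in for torchtune.training.FullModelTorchTuneCheckpointer."""

class FullModelHFCheckpointer:
    """Stand-in for any other checkpointer type."""

def check_quantization_checkpointer(checkpointer, quantizer_cfg):
    # Quantized weights are saved in torchtune's native format, so
    # evaluating them via another checkpointer would fail later in
    # confusing ways; fail fast with a clear message instead.
    if quantizer_cfg is not None and not isinstance(
        checkpointer, FullModelTorchTuneCheckpointer
    ):
        raise ValueError(
            "Quantized models must be loaded with the TorchTune "
            "checkpointer (FullModelTorchTuneCheckpointer)."
        )
```

A unit test for this guard only needs to assert that the error is raised for a non-TorchTune checkpointer when a quantizer is configured, and not raised otherwise.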

@codecov-commenter

Codecov Report

Attention: Patch coverage is 0% with 10 lines in your changes missing coverage. Please review.

Project coverage is 67.35%. Comparing base (27b0fcc) to head (f472c6d).
Report is 2 commits behind head on main.

| Files with missing lines | Patch % | Lines         |
|--------------------------|--------:|--------------:|
| recipes/eleuther_eval.py |   0.00% | 10 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1777      +/-   ##
==========================================
- Coverage   67.38%   67.35%   -0.03%     
==========================================
  Files         305      305              
  Lines       15972    15979       +7     
==========================================
  Hits        10762    10762              
- Misses       5210     5217       +7     


@joecummings (Contributor) left a comment


STAMP

@joecummings joecummings mentioned this pull request Oct 9, 2024
@SalmanMohammadi SalmanMohammadi merged commit 209d55d into pytorch:main Oct 9, 2024
17 checks passed
@SalmanMohammadi SalmanMohammadi deleted the fix_eval_quantize branch October 9, 2024 12:12
mori360 pushed a commit to mori360/torchtune that referenced this pull request Oct 14, 2024