Fix for llava models not generating text with test failures in 1.19 (h…
tthakkal authored and imangohari1 committed Dec 10, 2024
1 parent f23bbb8 commit b32e148
Showing 2 changed files with 3 additions and 1 deletion.
2 changes: 1 addition & 1 deletion examples/image-to-text/README.md
@@ -138,7 +138,7 @@ QUANT_CONFIG=./quantization_config/maxabs_measure.json python run_pipeline.py \

Here is an example to quantize the model based on previous measurements for Llava-1.5-7b:
```bash
-QUANT_CONFIG=./quantization_config/maxabs_quant.json python run_pipeline.py \
+QUANT_CONFIG=./quantization_config/maxabs_quant_scale_format_const.json python run_pipeline.py \
--model_name_or_path llava-hf/llava-1.5-7b-hf \
--image_path "https://llava-vl.github.io/static/images/view.jpg" \
--use_hpu_graphs \
```
2 changes: 2 additions & 0 deletions tests/test_image_to_text_example.py
@@ -89,6 +89,8 @@ def _test_image_to_text(
"llava-hf/llava-v1.6-mistral-7b-hf",
"llava-hf/llava-v1.6-vicuna-7b-hf",
"llava-hf/llava-v1.6-vicuna-13b-hf",
+"llava-hf/llava-1.5-7b-hf",
+"llava-hf/llava-1.5-13b-hf",
]:
quant_file_path = "image-to-text/quantization_config/maxabs_quant_scale_format_const.json"
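
The test change above extends the list of models that get the `maxabs_quant_scale_format_const.json` quantization config. The selection logic can be sketched as a minimal standalone function; the model list and config path are taken from the diff, while the function name and the fallback config path are assumptions for illustration (the full `_test_image_to_text` harness is not reproduced here):

```python
# Models routed to the const-scale-format quantization config,
# per the test diff (the two llava-1.5 entries are the new additions).
MODELS_NEEDING_CONST_SCALE_FORMAT = [
    "llava-hf/llava-v1.6-mistral-7b-hf",
    "llava-hf/llava-v1.6-vicuna-7b-hf",
    "llava-hf/llava-v1.6-vicuna-13b-hf",
    "llava-hf/llava-1.5-7b-hf",
    "llava-hf/llava-1.5-13b-hf",
]


def select_quant_config(model_name_or_path: str) -> str:
    """Pick the quantization config file for a model (illustrative sketch)."""
    if model_name_or_path in MODELS_NEEDING_CONST_SCALE_FORMAT:
        return "image-to-text/quantization_config/maxabs_quant_scale_format_const.json"
    # Fallback path is assumed; the diff hunk does not show the default branch.
    return "image-to-text/quantization_config/maxabs_quant.json"
```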

