new mllm eval #317

Merged · 19 commits · Nov 15, 2024
7 changes: 2 additions & 5 deletions auto_round/__main__.py
```diff
@@ -41,11 +41,8 @@ def run_mllm():
     tune(args)

 def run_lmms():
-    try:
-        import importlib
-        importlib.import_module("lmms_eval")
-    except:
-        raise ImportError("please install lmms_eval first.")
+    from transformers.utils.versions import require_version
+    require_version("lmms_eval", "please install lmms_eval first.")
     # from auto_round.script.lmms_eval import setup_lmms_args, eval
     from auto_round.script.mllm import setup_lmms_parser, lmms_eval
     args = setup_lmms_parser()
```
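The `require_version` call fails fast with a helpful hint when the optional dependency is absent. A minimal sketch of that fail-fast pattern using only the standard library (the `require_package` helper below is hypothetical, not part of auto-round or transformers):

```python
import importlib.metadata

def require_package(name: str, hint: str) -> None:
    # Hypothetical sketch: look up the installed distribution and
    # raise ImportError with a human-readable hint if it is missing.
    try:
        importlib.metadata.version(name)
    except importlib.metadata.PackageNotFoundError:
        raise ImportError(hint)

# A package that is certainly absent triggers the hint:
try:
    require_package("surely_not_installed_pkg", "please install lmms_eval first.")
except ImportError as e:
    print(e)  # → please install lmms_eval first.
```

Note that `importlib.metadata.PackageNotFoundError` subclasses `ModuleNotFoundError`, so callers catching plain `ImportError` still work.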
5 changes: 2 additions & 3 deletions auto_round/mllm/README.md
````diff
@@ -1,14 +1,13 @@
 # AutoRound for MLLMs
 ## Basic Usage (Gaudi2/CPU/GPU)
-A user guide detailing the full list of supported arguments is provided by calling ```auto-round -h``` on the terminal. Alternatively, you can use ```auto_round``` instead of ```auto-round```. Set the format you want in `format`;
+A user guide detailing the full list of supported arguments is provided by calling ```auto-round-mllm -h``` on the terminal. Alternatively, you can use ```auto_round_mllm``` instead of ```auto-round-mllm```. Set the format you want in `format`;
 exporting to multiple formats is supported.

 ```bash
 auto-round-mllm \
     --model Qwen/Qwen2-VL-2B-Instruct \
     --bits 4 \
     --batch_size 1 \
     --nsamples 128 \
     --gradient_accumulate_steps 4 \
````
> **Contributor** (review comment on `--gradient_accumulate_steps`): add a comment "experimental feature, default hyperparameters may be changed later"
````diff
     --group_size 128 \
     --format "auto_round" \
@@ -37,7 +36,7 @@ autoround.save_quantized(output_dir, format='auto_round', inplace=True)
 ```
````

### Dataset

````diff
-For mllm, we used liuhaotian/llava_conv_58k as our default calib datasets. Through command ```--dataset```, user can use other datasets such as "liuhaotian/llava_instruct_80k", "liuhaotian/llava_instruct_150k" or a file path to use local file.
+For MLLMs, we use liuhaotian/llava_conv_58k as the default calibration dataset. Through the ```--dataset``` argument, users can select other datasets such as "liuhaotian/llava_instruct_80k" or "liuhaotian/llava_instruct_150k", or pass a file path to use a local file.
````
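Since the `--dataset` value may be either a Hugging Face dataset identifier or a local file path, a loader has to tell the two apart. A rough illustration of one way to do that (the `resolve_dataset` helper is illustrative only, not auto-round's actual logic):

```python
import os

def resolve_dataset(arg: str) -> str:
    # Illustrative heuristic: anything that exists on disk is treated as a
    # local calibration file; otherwise assume a Hugging Face dataset name.
    return "local file" if os.path.exists(arg) else "hub dataset"

print(resolve_dataset("liuhaotian/llava_instruct_80k"))  # → hub dataset
```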

### Limitation
So far, auto-round for MLLMs supports five model families, including Qwen2, Llama, Phi3v, Llava, and CogVLM2.
7 changes: 3 additions & 4 deletions auto_round/mllm/eval.py
```diff
@@ -348,10 +348,7 @@ def lmms_eval(
         use_cache=None,
         apply_chat_template=False
 ):
-    try:
-        from auto_round import AutoRoundConfig
-    except:
-        from auto_round.auto_quantizer import AutoHfQuantizer
+    from auto_round import AutoRoundConfig

     if isinstance(tasks, str):
         tasks = tasks.replace(' ', '').split(',')
```
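The context lines above normalize a comma-separated task string into a list by dropping spaces and splitting on commas; a quick standalone check:

```python
# Same normalization as in lmms_eval(): drop spaces, split on commas.
tasks = "mmbench, mme,  pope"
if isinstance(tasks, str):
    tasks = tasks.replace(' ', '').split(',')
print(tasks)  # → ['mmbench', 'mme', 'pope']
```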
```diff
@@ -379,6 +376,8 @@
         model_args = f"model_id_name={model}"
     else:
         model_args = f"pretrained={model}"
+    if MODEL_TYPE_TO_LMMS_MODEL[model_type] == "llama_vision":
+        model_args += f",device_map={device}"
     results = _lmms_eval.evaluator.simple_evaluate(
         model=MODEL_TYPE_TO_LMMS_MODEL[model_type],
         model_args=model_args,
```
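The added branch appends an explicit `device_map` to the lmms-eval `model_args` string for llama_vision models only. A self-contained sketch of that logic (the mapping contents and model names here are illustrative, not the project's full table):

```python
# Illustrative subset of the real MODEL_TYPE_TO_LMMS_MODEL mapping.
MODEL_TYPE_TO_LMMS_MODEL = {"mllama": "llama_vision", "qwen2_vl": "qwen2_vl"}

def build_model_args(model: str, model_type: str, device: str) -> str:
    model_args = f"pretrained={model}"
    # llama_vision additionally needs an explicit device_map.
    if MODEL_TYPE_TO_LMMS_MODEL[model_type] == "llama_vision":
        model_args += f",device_map={device}"
    return model_args

print(build_model_args("meta-llama/Llama-3.2-11B-Vision", "mllama", "cuda:0"))
# → pretrained=meta-llama/Llama-3.2-11B-Vision,device_map=cuda:0
```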