Llama 3.2 vision - unable to convert #2079

Open

pdufour opened this issue Oct 24, 2024 · 0 comments

Labels: bug (Something isn't working)

pdufour commented Oct 24, 2024

System Info

python3 --version

Python 3.10.15

requirements.txt

huggingface_hub
streamlit
transformers[torch]==4.46.0
onnxruntime==1.19.2
optimum==1.23.2
onnx==1.17.0
onnxconverter-common==1.14.0
tqdm==4.66.5
onnxslim==0.1.35
--extra-index-url https://pypi.ngc.nvidia.com
onnx_graphsurgeon==0.5.2
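
(These pins can be installed with python3 -m pip install -r requirements.txt; pip picks up the --extra-index-url line from inside the file, so no extra flags should be needed.)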

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction (minimal, reproducible, runnable)

  1. Clone https://github.com/huggingface/transformers.js (note: I updated the transformers version as shown in the requirements.txt above)
  2. Run python3 -m scripts.convert --quantize --model_id meta-llama/Llama-3.2-11B-Vision

This gives the following error:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.15/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/homebrew/Cellar/python@3.10/3.10.15/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "transformers.js/scripts/convert.py", line 462, in <module>
    main()
  File "transformers.js/scripts/convert.py", line 349, in main
    main_export(**export_kwargs)
  File ".venv/lib/python3.10/site-packages/optimum/exporters/onnx/__main__.py", line 303, in main_export
    model = TasksManager.get_model_from_task(
  File ".venv/lib/python3.10/site-packages/optimum/exporters/tasks.py", line 2071, in get_model_from_task
    model_class = TasksManager.get_model_class_for_task(
  File ".venv/lib/python3.10/site-packages/optimum/exporters/tasks.py", line 1394, in get_model_class_for_task
    raise KeyError(
KeyError: "Unknown task: image-text-to-text. Possible values are: `audio-classification` for AutoModelForAudioClassification, `audio-frame-classification` for AutoModelForAudioFrameClassification, `audio-xvector` for AutoModelForAudioXVector, `automatic-speech-recognition` for ('AutoModelForSpeechSeq2Seq', 'AutoModelForCTC'), `depth-estimation` for AutoModelForDepthEstimation, `feature-extraction` for AutoModel, `fill-mask` for AutoModelForMaskedLM, `image-classification` for AutoModelForImageClassification, `image-segmentation` for ('AutoModelForImageSegmentation', 'AutoModelForSemanticSegmentation'), `image-to-image` for AutoModelForImageToImage, `image-to-text` for AutoModelForVision2Seq, `mask-generation` for AutoModel, `masked-im` for AutoModelForMaskedImageModeling, `multiple-choice` for AutoModelForMultipleChoice, `object-detection` for AutoModelForObjectDetection, `question-answering` for AutoModelForQuestionAnswering, `semantic-segmentation` for AutoModelForSemanticSegmentation, `text-to-audio` for ('AutoModelForTextToSpectrogram', 'AutoModelForTextToWaveform'), `text-generation` for AutoModelForCausalLM, `text2text-generation` for AutoModelForSeq2SeqLM, `text-classification` for AutoModelForSequenceClassification, `token-classification` for AutoModelForTokenClassification, `zero-shot-image-classification` for AutoModelForZeroShotImageClassification, `zero-shot-object-detection` for AutoModelForZeroShotObjectDetection"
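
The KeyError can be reproduced in isolation, without downloading the gated model (a minimal sketch assuming the optimum 1.23.2 TasksManager API shown in the traceback above):

from optimum.exporters.tasks import TasksManager

# "image-text-to-text" is the task that ends up being requested for this model,
# but it is missing from optimum's task-to-AutoModel mapping, so the lookup
# raises the same KeyError as the export above.
try:
    TasksManager.get_model_class_for_task("image-text-to-text")
except KeyError as err:
    print(err)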

Expected behavior

The conversion script should be able to handle this model type (the image-text-to-text task) and export it to ONNX.
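
As a possible stopgap (an untested sketch, not a fix): main_export accepts an explicit task, so overriding the inferred image-text-to-text task with one optimum already knows about gets past this particular KeyError. The task name and output directory below are assumptions, and the export will most likely still fail later, since optimum does not appear to register an ONNX config for this architecture.

from optimum.exporters.onnx import main_export

# Hypothetical workaround sketch: override the auto-inferred "image-text-to-text"
# task with a supported one. "image-to-text" is an assumption (the closest task in
# the list from the error message); the export may still fail at a later stage.
main_export(
    model_name_or_path="meta-llama/Llama-3.2-11B-Vision",
    output="llama-3.2-11b-vision-onnx",  # hypothetical output directory
    task="image-to-text",
)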
