@smile2game Thank you. Qwen is not natively supported in Transformers (Qwen2 is: huggingface/transformers#28436). I tried running the export for Qwen-7B and got:
Traceback (most recent call last):
  File "/home/felix/miniconda3/envs/fx/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/home/felix/optimum/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/home/felix/optimum/optimum/commands/export/onnx.py", line 261, in run
    main_export(
  File "/home/felix/optimum/optimum/exporters/onnx/__main__.py", line 351, in main_export
    onnx_export_from_model(
  File "/home/felix/optimum/optimum/exporters/onnx/convert.py", line 1035, in onnx_export_from_model
    raise ValueError(
ValueError: Trying to export a qwen model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type qwen to be supported natively in the ONNX export.
Feature request
I need to export the Qwen model to ONNX to accelerate inference. The command I ran:
optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code
Motivation
I want to export the Qwen model so that it can run with ONNX Runtime.
Your contribution
I can provide example inputs and outputs for the model.