
can't import ONNX model #17

Open
franperezlopez opened this issue May 18, 2018 · 3 comments

@franperezlopez

I'm trying to import an ONNX model generated by the Cognitive Services Custom Vision classifier. I get the following output log in the AI Tools window:

C:\Users\frank\AppData\Local\amlworkbench\Python\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Traceback (most recent call last):
  File "C:\USERS\FRANK\APPDATA\LOCAL\MICROSOFT\VISUALSTUDIO\15.0_15C46C60\EXTENSIONS\JIZDA1Z2.1ND\RuntimeSDK\mlscoring\exporter\model_preprocess.py", line 503, in <module>
    main()
  File "C:\USERS\FRANK\APPDATA\LOCAL\MICROSOFT\VISUALSTUDIO\15.0_15C46C60\EXTENSIONS\JIZDA1Z2.1ND\RuntimeSDK\mlscoring\exporter\model_preprocess.py", line 59, in main
    pre_process(preprocessParams, args.result_path)
  File "C:\USERS\FRANK\APPDATA\LOCAL\MICROSOFT\VISUALSTUDIO\15.0_15C46C60\EXTENSIONS\JIZDA1Z2.1ND\RuntimeSDK\mlscoring\exporter\model_preprocess.py", line 80, in pre_process
    pre_process_onnx_model(preprocessParams, result_path)
  File "C:\USERS\FRANK\APPDATA\LOCAL\MICROSOFT\VISUALSTUDIO\15.0_15C46C60\EXTENSIONS\JIZDA1Z2.1ND\RuntimeSDK\mlscoring\exporter\model_preprocess.py", line 113, in pre_process_onnx_model
    interfaces,_,_,_= extract_onnx_model_information(preprocessParams.src_path)
  File "C:\USERS\FRANK\APPDATA\LOCAL\MICROSOFT\VISUALSTUDIO\15.0_15C46C60\EXTENSIONS\JIZDA1Z2.1ND\RuntimeSDK\mlscoring\exporter\onnx_exporter\onnx_exporter.py", line 197, in extract_onnx_model_information
    sn_type = _onnx_type_to_mlscoring_type(data_type_name)
  File "C:\USERS\FRANK\APPDATA\LOCAL\MICROSOFT\VISUALSTUDIO\15.0_15C46C60\EXTENSIONS\JIZDA1Z2.1ND\RuntimeSDK\mlscoring\exporter\onnx_exporter\onnx_exporter.py", line 211, in _onnx_type_to_mlscoring_type
    ERROR_CODE["UNSUPPORTED_DATA_TYPE_ERROR"][1].format(onnx_type))
mlscoring.exporter.exception.ExportException: 208:'Using unsupported tensor data type UNDEFINED'
@shishaochen
Contributor

According to the log, one of the output nodes in your ONNX model uses an unsupported data type; the accepted types are float, double, int64, int32, int16, int8, uint16, uint8, bool, and string.
Could you open the ONNX model using Netron to find the data types of all input/output nodes? That would help our debugging.
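
For reference, the same information can also be read programmatically with the onnx Python package. A minimal sketch, assuming the model is saved as "model.onnx" (placeholder path):

    import onnx

    # Load the exported model; "model.onnx" is a placeholder path.
    model = onnx.load("model.onnx")

    # Print the declared name and type of every graph input and output.
    for kind, values in (("input", model.graph.input),
                         ("output", model.graph.output)):
        for value_info in values:
            print(kind, value_info.name, value_info.type)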

@franperezlopez
Author

Help yourself ... I've shared the model file:
https://1drv.ms/u/s!AriUTSXY-Oo8mYhgzqXKLlq82KHIAA

@shishaochen
Contributor

shishaochen commented May 19, 2018

Checking your model, the output nodes are:

  • loss: <string,float[1]>
  • classLabel: string[1]

The data type of the "loss" node is a map, which is not supported yet.
Assuming the "classLabel" output is enough for you, you can remove the "loss" node by pruning or reconstructing your inference graph; a sketch follows below.
Then you can create a model inference project based on it.
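
A minimal pruning sketch using the onnx Python package (the file paths are placeholders, and the ZipMap note is an assumption about how classifier exports are typically structured, not something verified against this model):

    import onnx

    # Load the exported model; "model.onnx" is a placeholder path.
    model = onnx.load("model.onnx")
    graph = model.graph

    # Drop the map-typed "loss" output so only "classLabel" remains.
    for output in [o for o in graph.output if o.name == "loss"]:
        graph.output.remove(output)

    # A strict checker may also expect the node that produces "loss"
    # (often a ZipMap in classifier exports) to be removed from graph.node.
    onnx.save(model, "model_pruned.onnx")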
