
Can't convert Core ML model to Onnx #376

Open
aof-github opened this issue Mar 18, 2020 · 12 comments

Comments


aof-github commented Mar 18, 2020

Hello,

I'm trying to convert a trained Core ML model (activity classification) to Onnx in order to convert it to TensorFlow Lite.
The problem is that I get errors. I've tried different versions of Python, onnxmltools, and winmltools, and none of them work. I also tried the onnx-ecosystem Docker image with the same result. Can anyone help me with this? Thanks in advance.

Script

import coremltools
import onnxmltools

input_coreml_model = '../model.mlmodel'
output_onnx_model = '../model.onnx'
coreml_model = coremltools.utils.load_spec(input_coreml_model)
onnx_model = onnxmltools.convert_coreml(coreml_model)
onnxmltools.utils.save_model(onnx_model, output_onnx_model)

Error Messages

IndexError                                Traceback (most recent call last)
<ipython-input-11-94a6dc527869> in <module>
      3 
      4 # Convert the CoreML model into ONNX
----> 5 onnx_model = onnxmltools.convert_coreml(coreml_model)
      6 
      7 # Save as protobuf

/usr/local/lib/python3.6/dist-packages/onnxmltools/convert/main.py in convert_coreml(model, name, initial_types, doc_string, target_opset, targeted_onnx, custom_conversion_functions, custom_shape_calculators)
     16     from .coreml.convert import convert
     17     return convert(model, name, initial_types, doc_string, target_opset, targeted_onnx,
---> 18                    custom_conversion_functions, custom_shape_calculators)
     19 
     20 

/usr/local/lib/python3.6/dist-packages/onnxmltools/convert/coreml/convert.py in convert(model, name, initial_types, doc_string, target_opset, targeted_onnx, custom_conversion_functions, custom_shape_calculators)
     58     target_opset = target_opset if target_opset else get_opset_number_from_onnx()
     59     # Parse CoreML model as our internal data structure (i.e., Topology)
---> 60     topology = parse_coreml(spec, initial_types, target_opset, custom_conversion_functions, custom_shape_calculators)
     61 
     62     # Parse CoreML description, author, and license. Those information will be attached to the final ONNX model.

/usr/local/lib/python3.6/dist-packages/onnxmltools/convert/coreml/_parse.py in parse_coreml(model, initial_types, target_opset, custom_conversion_functions, custom_shape_calculators)
    465     # Instead of using CoremlModelContainer, we directly pass the model in because _parse_model is CoreML-specific.
    466     _parse_model(topology, scope, model)
--> 467     topology.compile()
    468 
    469     for variable in topology.find_root_and_sink_variables():

/usr/local/lib/python3.6/dist-packages/onnxconverter_common/topology.py in compile(self)
    630         self._resolve_duplicates()
    631         self._fix_shapes()
--> 632         self._infer_all_types()
    633         self._check_structure()
    634 

/usr/local/lib/python3.6/dist-packages/onnxconverter_common/topology.py in _infer_all_types(self)
    506                 pass  # in Keras converter, the shape calculator can be optional.
    507             else:
--> 508                 operator.infer_types()
    509 
    510     def _resolve_duplicates(self):

/usr/local/lib/python3.6/dist-packages/onnxconverter_common/topology.py in infer_types(self)
    108     def infer_types(self):
    109         # Invoke a core inference function
--> 110         registration.get_shape_calculator(self.type)(self)
    111 
    112 

/usr/local/lib/python3.6/dist-packages/onnxmltools/convert/coreml/shape_calculators/neural_network/Concat.py in calculate_concat_output_shapes(operator)
     22         if variable.type.shape[0] != 'None' and variable.type.shape[0] != output_shape[0]:
     23             raise RuntimeError('Only dimensions along C-axis can be different')
---> 24         if variable.type.shape[2] != 'None' and variable.type.shape[2] != output_shape[2]:
     25             raise RuntimeError('Only dimensions along C-axis can be different')
     26         if variable.type.shape[3] != 'None' and variable.type.shape[3] != output_shape[3]:

IndexError: list index out of range
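The shape calculator at the bottom of that traceback indexes `shape[2]` and `shape[3]` unconditionally, so any input whose inferred shape has fewer than four dimensions raises this error. A minimal illustration (the 2-D shape below is an assumed example, not taken from the actual model):

```python
# the Concat shape calculator assumes 4-D [N, C, H, W] shapes and reads
# shape[2]/shape[3] directly, so a shorter shape list raises IndexError
shape = ['None', 3]  # e.g. a 2-D sequence tensor from an activity-classification model

try:
    _ = shape[2]
except IndexError as exc:
    print(exc)  # → list index out of range
```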
@jiafatom
Collaborator

Can you share the model, please?


Laubeee commented Jul 7, 2020

I have a similar error when trying to convert the OpenPose COCO model. To obtain it, use the getModels script: https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/models The conversion to CoreML works fine, but CoreML to ONNX breaks:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/onnxmltools/convert/main.py", line 19, in convert_coreml
    custom_conversion_functions, custom_shape_calculators)
  File "/usr/local/lib/python3.5/dist-packages/onnxmltools/convert/coreml/convert.py", line 81, in convert
    onnx_model = convert_topology(topology, name, doc_string, target_opset, targeted_onnx)
  File "/usr/local/lib/python3.5/dist-packages/onnxconverter_common/topology.py", line 796, in convert_topology
    nodes = optimize_onnx(container.nodes, nhwc_inputs, container.inputs + extra_inputs, container.outputs)
  File "/usr/local/lib/python3.5/dist-packages/onnxconverter_common/optimizer.py", line 1636, in optimize_onnx
    node_list = _process_optimization(node_list, target_opset)
  File "/usr/local/lib/python3.5/dist-packages/onnxconverter_common/optimizer.py", line 1547, in _process_optimization
    solution = optm.find(node_)
  File "/usr/local/lib/python3.5/dist-packages/onnxconverter_common/optimizer.py", line 1454, in find
    if MergeCommonSequenceOptimizer.is_same_node_merge(succ_0, succ_1, node):
  File "/usr/local/lib/python3.5/dist-packages/onnxconverter_common/optimizer.py", line 1517, in is_same_node_merge
    val_0 = numpy_helper.to_array(pred_0.tensors[0])
IndexError: list index out of range


Laubeee commented Jul 7, 2020

Trying to convert the body_25 model, I get a different error:

  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/onnxmltools/convert/main.py", line 19, in convert_coreml
    custom_conversion_functions, custom_shape_calculators)
  File "/usr/local/lib/python3.5/dist-packages/onnxmltools/convert/coreml/convert.py", line 81, in convert
    onnx_model = convert_topology(topology, name, doc_string, target_opset, targeted_onnx)
  File "/usr/local/lib/python3.5/dist-packages/onnxconverter_common/topology.py", line 776, in convert_topology
    get_converter(operator.type)(scope, operator, container)
  File "/usr/local/lib/python3.5/dist-packages/onnxmltools/convert/coreml/operator_converters/neural_network/Activation.py", line 27, in convert_activation
    apply_prelu(scope, inputs, outputs, container, operator_name=attrs['name'], slope=[params.PReLU.alpha])
  File "/usr/local/lib/python3.5/dist-packages/onnxconverter_common/onnx_ops.py", line 739, in apply_prelu
    s_shape = slope.shape
AttributeError: 'list' object has no attribute 'shape'


bwery commented Sep 11, 2020

I encounter the same problem as Laubeee when trying to convert a Caffe model using CoreML as an intermediate representation. Conversion from Caffe to CoreML works fine, but conversion from CoreML to ONNX crashes exactly as described in Laubeee's first post on July 7 (IndexError: list index out of range). I have exactly the same traceback.

I use version 1.7.0 on Python 3.6.9.


Laubeee commented Sep 11, 2020

btw: the openVINO model optimizer also had trouble with the same model. A workaround seems to be to define a fixed input size (in the model it is 1x1 px, since the real size is defined at runtime), but some report that this behaves strangely and gives quite different results compared to the original model... see: openvinotoolkit/openvino#1307

@bwery do you have a similar input structure? If so, you could try fixing the input size at conversion time and see if that helps.
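One way to try this with onnxmltools is the `initial_types` argument of `convert_coreml` (visible in the tracebacks above). A hedged sketch; the input name, channel count, and 368x368 size are placeholders, not values taken from the OpenPose model:

```python
def convert_with_fixed_input(coreml_model, input_name, height, width):
    """Convert a CoreML model to ONNX with the input shape pinned up front."""
    import onnxmltools
    from onnxmltools.convert.common.data_types import FloatTensorType

    # initial_types overrides the (possibly dynamic) shape recorded in the model,
    # e.g. a fixed 1x3xHxW image input instead of a runtime-defined size
    initial_types = [(input_name, FloatTensorType([1, 3, height, width]))]
    return onnxmltools.convert_coreml(coreml_model, initial_types=initial_types)
```

Whether this sidesteps the optimizer crash likely depends on the model; it only removes the dynamic-shape variable from the equation.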


bwery commented Sep 11, 2020

Hello Laubeee,

My network has a single fixed-size input (a single colour picture).
The main "unusual" element of this network is that it produces multiple outputs in different branches, and it also includes "concatenation" operators.


bwery commented Sep 11, 2020

I just tried modifying the network to concatenate the outputs and mix them so that it has a single output. I still get the same error, so the fact that I have multiple outputs is not the cause of the problem.


bwery commented Sep 11, 2020

Investigating a little more, I have found that the problem is clearly inside the "optimize_onnx" routine in "optimizer.py" from the imported "onnxconverter_common" project.

Since "optimize_onnx" takes a topology as input and produces an "optimized" topology of the same class as output, and since this step appears to be optional (it is controlled by the "container.enable_optimizer" flag, although I do not see what would ever make this flag false), there is a workaround: simply skip the optimization step.

To implement the workaround, I replaced line 796 of topology.py with the content of line 798.

My network is now properly converted and operational. A look in Netron shows its structure is what I expected.


Laubeee commented Nov 16, 2020

by the way, the dynamic input size causes problems in my case. I had to pin the dims, similar to what is described here (note dim_value, the integer field, rather than dim_param, which holds a symbolic dimension name):

onnx_model.graph.input[0].type.tensor_type.shape.dim[2].dim_value = 128  # height
onnx_model.graph.input[0].type.tensor_type.shape.dim[3].dim_value = 224  # width

@npapapietro

> Investigating a little more, I have found that the problem is clearly inside the "optimize_onnx" routine from the file "optimizer.py" in imported project "onnxconverter_common".
>
> As this "optimize_onnx" routine takes a topology in input to generate an "optimized" topology in output, both belonging to the same class and as this operation seems to be optional (it is controlled through flag "container.enable_optimizer" but I do not see what would make this flag false), there is a workaround which is simply to skip this optimization step.
>
> To implement the work around, I have replaced line 796 in topology.py by content of line 798.
>
> My network now is properly converted and operational. A look in Netron shows its structure is what I was expecting.

This work around fixed my error too.

Was getting

  File "/home/user/miniconda3/envs/caffe/lib/python3.8/site-packages/onnxconverter_common/optimizer.py", line 1517, in is_same_node_merge
    val_0 = numpy_helper.to_array(pred_0.tensors[0])
IndexError: list index out of range

and setting the optimization flag to false did the trick.
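A hedged sketch of that flag-style workaround without editing the installed sources: replace the optimizer entry point with a pass-through. This assumes topology.py imports `optimize_onnx` into its own namespace, as the tracebacks above suggest; apply the patch before calling `convert_coreml`.

```python
def passthrough_optimize(node_list, *args, **kwargs):
    # return the node list unchanged, i.e. skip the optimization pass entirely
    return node_list

try:
    import onnxconverter_common.topology as topo
    topo.optimize_onnx = passthrough_optimize  # convert_topology now skips optimization
except ImportError:
    pass  # the patch only matters in an environment where the converter is installed
```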

Collaborator

jiafatom commented Dec 15, 2020

@Laubeee @bwery @npapapietro The line 1517 bug has been fixed for a while. We have a check before that line. Can you pull onnxconverter-common master branch and retry?
By the way, we plan to have a new onnxconverter-common release probably by the end of this month.


bwery commented Mar 17, 2021

I apologize for this long delay before answering.

I have upgraded now to releases 1.8.0 of onnxconverter-common and onnxmltools. This problem appears to be solved on my side.

Thank you !
