Can't convert Core ML model to Onnx #376
Comments
Can you share the model, please?
I have a similar error when trying to convert the openpose COCO model. To obtain it, use the getModels script: https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/models. The conversion to Core ML works fine, but the conversion from Core ML to ONNX breaks:
Trying to convert the body_25 model, I get a different error:
I encounter the same problem as Laubeee when trying to convert a Caffe model using Core ML as an intermediate representation. Conversion from Caffe to Core ML works fine, but conversion from Core ML to ONNX crashes exactly as described by Laubeee in his first post of 7 July (IndexError: list index out of range). I have exactly the same traceback. I use version 1.7.0 on Python 3.6.9.
By the way: the openVINO model optimizer also had trouble with the same model. A workaround seems to be to define a fixed input size (in the model it is 1x1 px, since the actual size is defined at runtime), but some report that this behaves strangely and gives quite different results compared to the original model... see: openvinotoolkit/openvino#1307. @bwery, do you have a similar input structure? If you like, you could try to fix the input size at conversion time and see if that helps.
Hello Laubeee, my network has a single fixed-size input (a single colour picture).
I just tried modifying the network to concatenate the outputs and mix them so as to have a single output. I still get the same error, so the fact that I have multiple outputs is not the cause of the problem.
Investigating a little more, I have found that the problem is clearly inside the "optimize_onnx" routine in the file "optimizer.py" of the imported project "onnxconverter_common". This "optimize_onnx" routine takes a topology as input and produces an "optimized" topology as output, both belonging to the same class, and the operation appears to be optional (it is controlled through the flag "container.enable_optimizer", though I do not see what would ever make this flag false). There is therefore a workaround, which is simply to skip this optimization step. To implement the workaround, I replaced line 796 of topology.py with the content of line 798. My network is now properly converted and operational, and a look in Netron shows that its structure is what I was expecting.
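For anyone who prefers not to edit topology.py by hand, a minimal sketch of the same idea is to monkey-patch optimize_onnx with a pass-through before converting. The module paths and the optimize_onnx signature below are assumptions based on recent onnxconverter-common layouts and may differ between releases:

```python
import onnxconverter_common.optimizer as occ_optimizer
import onnxconverter_common.topology as occ_topology

def _skip_optimize(onnx_nodes, *args, **kwargs):
    # Pass-through: return the node list unchanged instead of optimizing it.
    return onnx_nodes

# Patch the function itself and, since topology.py binds the name at import
# time, the reference held by the topology module as well.
occ_optimizer.optimize_onnx = _skip_optimize
if hasattr(occ_topology, "optimize_onnx"):
    occ_topology.optimize_onnx = _skip_optimize

# Any subsequent onnxmltools.convert_coreml(...) call should now keep the
# unoptimized (but valid) graph, matching the effect of the manual edit above.
```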
By the way, the dynamic input size causes problems in my case too. I had to fix the input dimensions, similar to what is described here:

```python
onnx_model.graph.input[0].type.tensor_type.shape.dim[2].dim_value = 128  # height
onnx_model.graph.input[0].type.tensor_type.shape.dim[3].dim_value = 224  # width
```
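For completeness, a self-contained version of that fix might look like the sketch below; the file names are placeholders and the NCHW input layout is an assumption.

```python
import onnx

# Hypothetical file names; adjust to the actual exported model.
onnx_model = onnx.load("model.onnx")

# Pin the spatial dimensions of the first graph input (NCHW layout assumed).
dims = onnx_model.graph.input[0].type.tensor_type.shape.dim
dims[2].dim_value = 128  # height
dims[3].dim_value = 224  # width

onnx.checker.check_model(onnx_model)
onnx.save(onnx_model, "model_fixed.onnx")
```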
This workaround fixed my error too. I was getting:

```
File "/home/user/miniconda3/envs/caffe/lib/python3.8/site-packages/onnxconverter_common/optimizer.py", line 1517, in is_same_node_merge
    val_0 = numpy_helper.to_array(pred_0.tensors[0])
IndexError: list index out of range
```

and setting the optimization flag to false did the trick.
@Laubeee @bwery @npapapietro The bug at line 1517 has been fixed for a while; there is now a check before that line. Can you pull the onnxconverter-common master branch and retry?
I apologize for the long delay before answering. I have now upgraded to release 1.8.0 of both onnxconverter-common and onnxmltools. The problem appears to be solved on my side. Thank you!
Hello,
I'm trying to convert a trained Core ML model (activity classification) to ONNX in order to then convert it to TensorFlow Lite.
The problem is that I get errors. I've tried different versions of Python, onnxmltools, and winmltools, and it doesn't seem to work. I also tried the onnx-ecosystem Docker image with the same result. Can anyone help me with this? Thanks in advance.
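The reporter's actual script and error messages were attached under the headings below and are not reproduced here. Purely as an illustrative sketch, with placeholder file names and target opset, a Core ML to ONNX conversion with onnxmltools generally follows this shape:

```python
import coremltools
import onnxmltools

# Placeholder model paths; the reporter's real script was attached separately.
coreml_model = coremltools.models.MLModel("ActivityClassifier.mlmodel")
onnx_model = onnxmltools.convert_coreml(coreml_model, target_opset=10)
onnxmltools.utils.save_model(onnx_model, "ActivityClassifier.onnx")
```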
Script
Error Messages