ONNX has been a well-known effort toward cross-framework compatibility. However, fully supporting ONNX raises several difficulties that may or may not be solvable, so Paddle's main inference frameworks should probably continue to rely on our internal format for now. The difficulties are:
1. The set of operators defined by ONNX is not enough to cover all common models. If Paddle trains a model with an operator that cannot be represented in ONNX, the trained model cannot be converted to ONNX, so no other framework can consume it through ONNX.
2. The concept of LoDTensor is not supported by ONNX. A large part of Paddle relies on LoDTensor, but it is an unknown concept to ONNX, so how to map Paddle operators that use LoDTensor onto ONNX operators is an open question (see the first sketch after this list).
3. Many Paddle operators, though they have similar names in ONNX, do not work the same way. For example, multiply in Paddle may reshape an input whose shape does not match what it expects, whereas the corresponding ONNX operator does not perform this implicit reshape (see the second sketch after this list). Enforcing consistency between all Paddle and ONNX operators is a non-trivial effort.
4. Currently we are only working on Paddle-to-ONNX conversion, not ONNX-to-Paddle conversion. This is asymmetric: users can leave Paddle by converting a Paddle model to, say, Caffe through ONNX, but they cannot move from Caffe to Paddle, because there is no ONNX-to-Paddle path.
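To make point 2 concrete, here is a minimal sketch of what a LoDTensor carries beyond a plain dense tensor. It uses plain NumPy rather than the Paddle API, and the offset layout is only illustrative:

```python
import numpy as np

# Three variable-length sequences (e.g. sentences of word ids).
seq_a = [1, 2, 3]
seq_b = [4, 5]
seq_c = [6, 7, 8, 9]

# A LoDTensor stores the concatenated data together with level-of-detail
# offsets that record where each sequence starts and ends.
data = np.array(seq_a + seq_b + seq_c, dtype="int64")  # shape (9,)
lod = [[0, 3, 5, 9]]  # seq_a = data[0:3], seq_b = data[3:5], seq_c = data[5:9]

# An ONNX tensor is just `data` with a static shape; the standard has no
# field for `lod`, so a converter would have to re-encode the sequence
# boundaries some other way (padding plus a lengths input, for instance).
for start, end in zip(lod[0][:-1], lod[0][1:]):
    print(data[start:end])
```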
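Point 3 can also be illustrated with NumPy alone. The sketch below assumes Paddle's multiply keeps the first dimension of its input and flattens the rest (in the real op this is controlled by attributes such as x_num_col_dims), while ONNX MatMul follows NumPy matmul semantics with no implicit flattening:

```python
import numpy as np

x = np.random.rand(2, 3, 4).astype("float32")  # 3-D activation
w = np.random.rand(12, 5).astype("float32")    # fully-connected weight

# Paddle-style multiply: implicitly flatten x to 2-D before the product.
x_2d = x.reshape(x.shape[0], -1)               # (2, 12)
paddle_like = x_2d @ w                         # (2, 5)

# ONNX MatMul performs no implicit flattening, so the raw 3-D input fails.
try:
    np.matmul(x, w)
except ValueError as err:
    print("ONNX-style MatMul rejects the shapes:", err)

# A Paddle->ONNX converter therefore has to insert an explicit Reshape node
# in front of MatMul to reproduce the Paddle behavior.
```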
Re: 4, we will get to it. But do you expect users to train their Caffe graphs in Paddle? Or are you referring to bringing in Caffe2 models for inference by Paddle?
Hello, this issue has not been updated in the past month, so we will close it today. If you still need to follow up after it is closed, please feel free to reopen it and we will get back to you within 24 hours. We apologize for any inconvenience caused by the closure, and thank you for your support of PaddlePaddle!