This repository has been archived by the owner on Nov 9, 2023. It is now read-only.

Failed to read the onnx model #23

Closed
Bob-jpg opened this issue Jan 16, 2023 · 16 comments

Comments

Bob-jpg commented Jan 16, 2023

[ INFO:0@0.111] global c:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\onnx\onnx_importer.cpp (797) cv::dnn::dnn4_v20220524::ONNXImporter::populateNet DNN/ONNX: loading ONNX v8 model produced by 'pytorch':1.14.0. Number of nodes = 263, initializers = 120, inputs = 1, outputs = 1
[ INFO:0@0.112] global c:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\onnx\onnx_importer.cpp (713) cv::dnn::dnn4_v20220524::ONNXImporter::parseOperatorSet DNN/ONNX: ONNX opset version = 17
OpenCV(4.6.0) Error: Unsupported format or combination of formats (Unsupported data type: FLOAT16) in cv::dnn::dnn4_v20220524::getMatFromTensor, file c:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\onnx\onnx_graph_simplifier.cpp, line 842
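The error comes from OpenCV's getMatFromTensor, which converts each ONNX initializer tensor into a cv::Mat and, in OpenCV 4.6, rejects FLOAT16 data. A minimal Python sketch of that type gate, using the ONNX TensorProto.DataType codes from the ONNX spec (the exact set of types OpenCV 4.6 accepts is an assumption inferred from the error message, not taken from the OpenCV source):

```python
# ONNX TensorProto.DataType codes, per the ONNX spec.
ONNX_DTYPE = {1: "FLOAT", 6: "INT32", 7: "INT64", 10: "FLOAT16", 11: "DOUBLE"}

# Hypothetical approximation of what OpenCV 4.6's importer tolerates;
# FLOAT16 is definitely rejected, per the error log above.
SUPPORTED_BY_OPENCV_46 = {"FLOAT", "INT32", "INT64", "DOUBLE"}

def check_tensor_dtype(code: int) -> str:
    """Mimic getMatFromTensor's type check for a single initializer."""
    name = ONNX_DTYPE.get(code, "UNKNOWN")
    if name not in SUPPORTED_BY_OPENCV_46:
        raise ValueError(f"Unsupported data type: {name}")
    return name

print(check_tensor_dtype(1))   # FLOAT passes
# check_tensor_dtype(10) would raise, matching the FLOAT16 failure above
```

Exporting the model with FP32 weights (i.e. without half-precision) avoids the FLOAT16 initializers entirely.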

UNeedCryDear changed the title to Failed to read the onnx model Jan 16, 2023
UNeedCryDear (Owner) commented:

Export the ONNX model with the flag --opset 12:

$ python path/to/export.py --weights yolov5s.pt --img [640,640] --opset 12 --include onnx


Bob-jpg commented Jan 16, 2023 via email


Bob-jpg commented Jan 16, 2023 via email

UNeedCryDear (Owner) commented:

Modify these parameters in your export.py:
[screenshot of the modified export.py default arguments]

and then

python export.py
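Since the screenshot did not survive, here is a hedged sketch of the kind of argparse defaults the owner is pointing at in yolov5's export.py (the exact argument names and defaults in your copy of export.py may differ; the key points are pinning opset to 12 and keeping FP32 weights, i.e. half-precision off):

```python
import argparse

# Hypothetical slice of yolov5's export.py argument parser with the
# suggested defaults: opset pinned to 12, half-precision export disabled.
parser = argparse.ArgumentParser()
parser.add_argument("--weights", type=str, default="yolov5s.pt")
parser.add_argument("--imgsz", "--img", nargs="+", type=int, default=[640, 640])
parser.add_argument("--half", action="store_true", default=False)  # keep FP32 weights
parser.add_argument("--opset", type=int, default=12)               # the failing model used opset 17
parser.add_argument("--include", nargs="+", default=["onnx"])

opt = parser.parse_args([])  # empty argv: a plain `python export.py` picks up the defaults
print(opt.opset, opt.half)
```

With the defaults changed this way, running `python export.py` with no arguments produces an FP32, opset-12 ONNX file that OpenCV 4.6 can read.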


Bob-jpg commented Jan 16, 2023 via email


Bob-jpg commented Feb 14, 2023 via email


UNeedCryDear commented Feb 14, 2023

Use GPU acceleration. If you are running inference with OpenCV it is more involved, because you need to build OpenCV with the contrib modules, enable the CUDA module, and recompile OpenCV before you can use OpenCV-CUDA acceleration. If you are using onnxruntime, make sure CUDA and cuDNN are installed correctly, pick the GPU build of onnxruntime, and then set the use-CUDA flag and the CUDA device id correctly when you read the model; I have already written the rest of the processing for you.


Bob-jpg commented Feb 14, 2023 via email

UNeedCryDear (Owner) commented:

Did you not read the tutorial before pruning, and did you not scroll down to the comments after reading it? The comments explain it clearly: pruning the model does not change the network structure.
ultralytics/yolov5#304

UNeedCryDear (Owner) commented:

If your goal is speed, then apart from CUDA, the mainstream approach right now is model quantization: you gain speed through quantization.
For example, quantizing the model to INT8 will be somewhat faster than FP32, and if your GPU supports FP16, FP16 will also be faster than FP32.
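The memory-bandwidth side of that argument can be seen with Python's stdlib struct module alone: an INT8 weight is 1 byte, an IEEE-754 half (FP16) is 2, a single-precision float (FP32) is 4, so quantization cuts the data the hardware must move by 2-4x. (The actual speedup also depends on whether the device has fast INT8/FP16 kernels, which this sketch does not show.)

```python
import struct

# Bytes per weight for the three precisions discussed above.
fp32 = struct.calcsize("f")   # 4 bytes, single precision
fp16 = struct.calcsize("e")   # 2 bytes, IEEE 754 half (struct format 'e')
int8 = struct.calcsize("b")   # 1 byte, signed 8-bit integer

print(fp32, fp16, int8)

# FP16 is lossy: round-tripping through format 'e' reduces the value
# to a 10-bit mantissa, so the result is close but not exact.
v = 3.14159
half = struct.unpack("e", struct.pack("e", v))[0]
print(abs(half - v) < 1e-2)
```

This is only the storage picture; INT8 quantization in practice also needs calibration so accuracy does not drop too far.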


Bob-jpg commented Feb 14, 2023 via email

UNeedCryDear (Owner) commented:

For CPU there is not much you can do; your only option is to look at whether OpenVINO offers any acceleration, but at the moment that seems limited to Intel CPUs, not AMD.
The other option is shrinking the model: swap the S model for the N one, or modify it by hand; if your dataset is simple you can try removing some convolution layers, or otherwise switch to a different network. That said, right now probably only YOLOv8 might be somewhat faster. Those are the options.


Bob-jpg commented Feb 14, 2023 via email

UNeedCryDear (Owner) commented:

What do you need to modify for v8? Where does it error when you run my code?


Bob-jpg commented Feb 14, 2023 via email


Bob-jpg commented May 16, 2023 via email
