Refactor benchmark #148


Merged: 23 commits merged into opencv:master on Apr 20, 2023
Conversation

@fengyuentau (Member) commented on Mar 17, 2023:

Changes:

  • use the mean as the default statistic for benchmark results.
  • change the result representation to something like this:
     $ python benchmark.py --cfg ./config/face_detection_yunet.yaml
     mean=1.17, median=1.19, min=1.06, input size=[160, 120], model: YuNet with ['face_detection_yunet_2022mar.onnx']
     mean=8.66, median=8.96, min=8.17, input size=[640, 480], model: YuNet with ['face_detection_yunet_2022mar.onnx']
     mean=1.37, median=1.48, min=1.06, input size=[160, 120], model: YuNet with ['face_detection_yunet_2022mar-act_int8-wt_int8-quantized.onnx']
     mean=10.96, median=11.17, min=8.17, input size=[640, 480], model: YuNet with ['face_detection_yunet_2022mar-act_int8-wt_int8-quantized.onnx']
  • add the --all flag, together with --cfg_exclude and --model_exclude, to run all benchmarks in one command with exclusions:
     $ python benchmark.py --all --model_exclude license_plate_detection_lpd_yunet_2023mar_int8.onnx:human_segmentation_pphumanseg_2023mar_int8.onnx
     Benchmarking ...
     backend=cv.dnn.DNN_BACKEND_OPENCV
     target=cv.dnn.DNN_TARGET_CPU
     mean       median     min        input size   model
     0.58       0.67       0.48       [160, 120]   YuNet with ['face_detection_yunet_2022mar.onnx']
     0.82       0.81       0.48       [160, 120]   YuNet with ['face_detection_yunet_2022mar_int8.onnx']
     6.18       6.33       5.83       [150, 150]   SFace with ['face_recognition_sface_2021dec.onnx']
     7.42       7.42       5.83       [150, 150]   SFace with ['face_recognition_sface_2021dec_int8.onnx']
     3.32       3.46       2.76       [112, 112]   FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
     4.27       4.22       2.76       [112, 112]   FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
     4.68       5.04       4.36       [224, 224]   MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
  • update the benchmark results, which now use the mean as the default statistic (a minimal sketch of the summary computation follows this list).
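
For reference, here is a minimal sketch of how the mean/median/min columns above can be computed and printed. This is not the actual benchmark.py code; the summarize helper, the timing values, and the row contents are hypothetical:

    import statistics

    def summarize(times_ms):
        # Hypothetical helper: reduce per-run latencies (in ms) to the
        # mean/median/min columns shown in the table above.
        return statistics.mean(times_ms), statistics.median(times_ms), min(times_ms)

    mean_ms, median_ms, min_ms = summarize([1.06, 1.17, 1.19, 1.23])
    header = '{:<10} {:<10} {:<10} {:<12} {}'
    row = '{:<10.2f} {:<10.2f} {:<10.2f} {:<12} {}'
    print(header.format('mean', 'median', 'min', 'input size', 'model'))
    print(row.format(mean_ms, median_ms, min_ms, '[160, 120]',
                     "YuNet with ['face_detection_yunet_2022mar.onnx']"))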

@fengyuentau added the benchmark label (Benchmark related issues & PRs) on Mar 17, 2023
@fengyuentau requested a review from WanliZhong on Mar 17, 2023 15:20
@fengyuentau self-assigned this on Mar 17, 2023
@fengyuentau (Member, Author) commented:
When trying to run benchmarks on all models, I found that the quantized PP-HumanSeg and the quantized LPD-YuNet are not supported by OpenCV DNN:

[ERROR:0@11.647] global onnx_importer.cpp:1054 handleNode DNN/ONNX: ERROR during processing node with 5 inputs and 1 outputs: [QLinearSoftmax]:(onnx_node!p2o.Softmax.0_quant) from domain='com.microsoft'
Traceback (most recent call last):
  File "opencv_zoo/benchmark/benchmark.py", line 191, in <module>
    model = model_handler(*model_path, **model_config)
  File "opencv_zoo/models/human_segmentation_pphumanseg/pphumanseg.py", line 16, in __init__
    self._model = cv.dnn.readNet(self._modelPath)
cv2.error: OpenCV(4.7.0) /Users/xperience/GHA-OCV-Python/_work/opencv-python/opencv-python/opencv/modules/dnn/src/onnx/onnx_importer.cpp:1073: error: (-2:Unspecified error) in function 'handleNode'
> Node [QLinearSoftmax@com.microsoft]:(onnx_node!p2o.Softmax.0_quant) parse error: OpenCV(4.7.0) /Users/xperience/GHA-OCV-Python/_work/opencv-python/opencv-python/opencv/modules/dnn/src/net_impl.hpp:108: error: (-2:Unspecified error) Can't create layer "onnx_node!p2o.Softmax.0_quant" of type "com.microsoft.QLinearSoftmax" in function 'getLayerInstance'
> 

It looks like the QLinearSoftmax operator (from the com.microsoft domain) is not yet supported. I will create an issue and a pull request for this.
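
As a hedged sketch (assuming only the documented cv.dnn.readNet API; the try_load helper below is hypothetical and not part of this PR), the benchmark could guard model loading and skip unsupported models instead of crashing:

    import cv2 as cv

    def try_load(model_path):
        # Hypothetical guard: skip models containing ONNX nodes that OpenCV DNN
        # cannot parse (e.g. com.microsoft.QLinearSoftmax) instead of aborting
        # the whole benchmark run.
        try:
            return cv.dnn.readNet(model_path)
        except cv.error as err:
            print('Skipping {}: {}'.format(model_path, err))
            return None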

@fengyuentau marked this pull request as ready for review on Apr 12, 2023 08:35
@fengyuentau (Member, Author) commented:
Benchmark results on new hardware will be added in another PR.

@zihaomu (Member) left a comment:
LGTM

@@ -8,8 +8,8 @@ class Tracking(BaseMetric):
     def __init__(self, **kwargs):
         super().__init__(**kwargs)
 
-        if self._warmup or self._repeat:
-            print('warmup and repeat in metric for tracking do not function.')
+        # if self._warmup or self._repeat:
A member commented on the changed lines above:
remove?

@WanliZhong (Member) left a comment:
LGTM!

@fengyuentau (Member, Author):
👍

@fengyuentau merged commit 09659f1 into opencv:master on Apr 20, 2023
@fengyuentau added a commit that referenced this pull request on Jun 8, 2023:
* use mean as default for benchmark metric; change result representation;
add --all for benchmarking all configs at a time

* fix comments

* add --model_exclude

* pretty print

* improve benchmark result table header: from band-xpu to xpu-band

* suppress print message

* update benchmark results on CPU-RPI

* add the new benchmark results on the new intel cpu

* fix backend and target setting in benchmark; pre-modify the names of int8 quantized models

* add results on jetson cpu

* add cuda results

* print target and backend when using --all

* add results on Khadas VIM3

* pretty print results

* true pretty print results

* update results in new format

* fix broken backend and target vars

* fix broken backend and target vars

* fix broken backend and target var

* update benchmark results on many devices

* add db results on Ascend-310

* update info on CPU-INTEL

* update usage of the new benchmark script
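
Several of the commits above deal with backend and target handling ("fix backend and target setting in benchmark", "print target and backend when using --all"). For reference only, a minimal sketch of how a backend/target pair such as the one printed in the --all output above is applied to a cv.dnn network; the model file name is taken from the earlier output, and this is not the PR's actual code:

    import cv2 as cv

    # Load one of the models from the benchmark output above.
    net = cv.dnn.readNet('face_detection_yunet_2022mar.onnx')

    # Apply the pair printed by the benchmark when using --all:
    # backend=cv.dnn.DNN_BACKEND_OPENCV, target=cv.dnn.DNN_TARGET_CPU
    net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)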
@WanliZhong added this to the 4.9.0 (first release) milestone on Dec 28, 2023
Labels: benchmark (Benchmark related issues & PRs)