Fixed RKNN Quantized Bug and Update to 1.2.9 (#1388)
* update

* update

* update

* update

* update

* update
Zheng-Bicheng authored Sep 20, 2024
1 parent 253b9e7 commit 2f9ad56
Showing 15 changed files with 1,546 additions and 1,314 deletions.
README.md: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ paddle2onnx --model_dir saved_inference_model \
| --opset_version | **[Optional]** Sets the ONNX OpSet version for conversion; versions 7~16 are currently supported; the default is 9 |
| --enable_onnx_checker | **[Optional]** Whether to check the correctness of the exported ONNX model; enabling this switch is recommended; the default is False |
| --enable_auto_update_opset | **[Optional]** Whether to enable automatic opset version upgrade: when conversion fails at a lower opset version, a higher version is selected automatically; the default is True |
-| --deploy_backend | **[Optional]** Inference engine for deploying the quantized model; supports onnxruntime, tensorrt, or others; when others is selected, all quantization information is stored in the max_range.txt file; the default is onnxruntime |
+| --deploy_backend | **[Optional]** Inference engine for deploying the quantized model; supports onnxruntime/rknn/tensorrt; the default is onnxruntime |
| --save_calibration_file | **[Optional]** Save path of the calibration cache file that TensorRT 8.X reads when deploying quantized models; the default is calibration.cache |
| --version | **[Optional]** Show the paddle2onnx version |
| --external_filename | **[Optional]** When the exported ONNX model is larger than 2 GB, the storage path for external data must be set; the recommended value is external_data |
README_en.md: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ The adjustable conversion parameters are listed in the following table:
| --opset_version | **[Optional]** Sets the ONNX OpSet version for conversion; versions 7~16 are currently supported; the default is 9 |
| --enable_onnx_checker | **[Optional]** Whether to check the correctness of the exported ONNX model; enabling this switch is recommended; the default is False |
| --enable_auto_update_opset | **[Optional]** Whether to enable automatic opset version upgrade: when conversion fails at a lower opset version, a higher version is selected automatically; the default is True |
-| --deploy_backend | **[Optional]** Inference engine for deploying the quantized model; supports onnxruntime, tensorrt, or others; when others is selected, all quantization information is stored in the max_range.txt file; the default is onnxruntime |
+| --deploy_backend | **[Optional]** Inference engine for deploying the quantized model; supports onnxruntime/rknn/tensorrt; the default is onnxruntime |
| --save_calibration_file | **[Optional]** Save path of the calibration cache file that TensorRT 8.X reads when deploying quantized models; the default is calibration.cache |
| --version | **[Optional]** Show the paddle2onnx version |
| --external_filename | **[Optional]** When the exported ONNX model is larger than 2 GB, the storage path of external data must be set; the recommended value is external_data |
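
For illustration, a minimal sketch of exporting a quantized model with the rknn backend value added in this commit; the --model_filename, --params_filename, and --save_file flags are assumed standard paddle2onnx CLI options and do not appear in this diff:

```bash
# Sketch: export a quantized Paddle inference model to ONNX, targeting the
# RKNN backend introduced in this commit.
# Assumptions: --model_filename, --params_filename, and --save_file are
# standard paddle2onnx flags not shown in the diff above; file names are
# placeholders.
paddle2onnx --model_dir saved_inference_model \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --save_file quantized_model.onnx \
            --opset_version 12 \
            --deploy_backend rknn
```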
