python=3.8, paddlepaddle==2.2.2, paddle-serving-server==0.8.3; the server is CentOS 7, CPU only.
1. Converted the ch_PP-OCRv3_rec_infer and ch_PP-OCRv3_det_infer models with paddle_serving_client.convert.
2. Ran python3 web_service.py.
When calling the prediction endpoint via a POST request, the result for the same image is inconsistent with the result of tools/infer/predict_system.py, differing by about 20%.
How should this problem be solved?
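For reference, a minimal client sketch for sending the POST request described above. The port, route, and payload format are assumptions taken from the PaddleOCR deploy/pdserving example (pipeline_http_client.py) and may differ from this deployment's web_service.py:

# Minimal sketch of a POST request to the OCR pipeline service.
# URL/port and the {"key": [...], "value": [...]} payload format follow the
# PaddleOCR pdserving example; adjust them to your own config.yml.
import base64
import json

import requests

url = "http://127.0.0.1:9998/ocr/prediction"  # assumed port and route
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf8")

data = {"key": ["image"], "value": [image_b64]}
resp = requests.post(url=url, data=json.dumps(data))
print(resp.json())  # compare against tools/infer/predict_system.py on the same image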
Additional details: the images are all 960x618, around 200 KB each. Load testing the serving endpoint at 100 concurrent requests, the average latency is about 40 s; a single request takes 2-3 seconds.
The image below is a screenshot of the serving log:
How can the recognition speed be improved without losing accuracy?
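For context, a minimal load-test sketch along the lines of the test described above. The endpoint URL and payload format are assumptions based on the PaddleOCR pdserving example, and this is not the actual tool used for the numbers reported here:

# Send 100 concurrent POST requests to the pipeline service and report latencies.
import base64
import json
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://127.0.0.1:9998/ocr/prediction"  # assumed endpoint

# Build one request payload from a sample image (base64-encoded, as the pipeline service expects).
with open("test.jpg", "rb") as f:
    payload = json.dumps({"key": ["image"],
                          "value": [base64.b64encode(f.read()).decode("utf8")]})

def one_request(_):
    start = time.time()
    requests.post(URL, data=payload)
    return time.time() - start

# 100 requests with 100 concurrent threads, mirroring the 100-concurrency test above.
with ThreadPoolExecutor(max_workers=100) as pool:
    latencies = list(pool.map(one_request, range(100)))

print("avg %.2fs, max %.2fs" % (sum(latencies) / len(latencies), max(latencies)))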
1. Accuracy
After converting the ch_PP-OCRv3_rec_infer and ch_PP-OCRv3_det_infer models with paddle_serving_client.convert, compare the MD5 values of the converted *.pdmodel and *.pdiparams files with the original files (a small sketch follows below). If they are identical, the problem is not in how the model was saved; check the model preprocessing instead.
2. Performance
First, please provide some additional information:
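A minimal sketch of the MD5 comparison suggested in item 1 above. The file paths are placeholders for wherever the original inference models and the convert output live; they are not taken from this issue:

# Compare the MD5 of each converted model file against the original.
import hashlib

def md5sum(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

pairs = [
    # (original file, converted file) -- placeholder paths, adjust to your layout
    ("./ch_PP-OCRv3_det_infer/inference.pdmodel", "./ppocr_det_v3_serving/inference.pdmodel"),
    ("./ch_PP-OCRv3_det_infer/inference.pdiparams", "./ppocr_det_v3_serving/inference.pdiparams"),
]
for original, converted in pairs:
    print(original, "match" if md5sum(original) == md5sum(converted) else "differ")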
1. Accuracy
I checked: the MD5 of *.pdmodel differs from the source file, but the MD5 of *.pdiparams matches the source file. This is the md5 check result:
The conversion was done following the OCR documentation. How do I check for problems in the model preprocessing?
2. Performance, additional information: 1. CPU inference. 2. No quantized model. 3. Process mode. 4. The det detection model runs at 20 concurrency and the rec recognition model at 10 concurrency (see the illustrative sketch after this list). The config.yml file is as follows:
3. Currently, enabling mkldnn acceleration on the CPU makes inference fail; the error in the log is as follows:
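For reference, an illustrative sketch of where the concurrency and process-mode settings from item 2 live in a PaddleServing pipeline config.yml. The field names follow the PaddleOCR deploy/pdserving example config, the model_config paths are placeholders, and this is not the actual file from this issue:

worker_num: 20                 # number of worker processes for the web service
http_port: 9998                # assumed port
dag:
    is_thread_op: false        # false = process mode, as described in item 3 above
op:
    det:
        concurrency: 20        # det concurrency reported above
        local_service_conf:
            client_type: local_predictor
            device_type: 0     # 0 = CPU
            devices: ""
            model_config: ./ppocr_det_v3_serving   # placeholder path
    rec:
        concurrency: 10        # rec concurrency reported above
        local_service_conf:
            client_type: local_predictor
            device_type: 0
            devices: ""
            model_config: ./ppocr_rec_v3_serving   # placeholder path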