# PP-OCRv4_OpenVINO

PP-OCRv4_OpenVINO is a demo project that shows how to run inference with the PP-OCRv4 model using OpenVINO. PP-OCRv4 is a general-purpose optical character recognition (OCR) solution that provides Chinese and English models for common scenarios, as well as multilingual models covering 80 languages.

You can run this project directly on aistudio, or run it locally as described below.
## Installation

To install the necessary dependencies for this project, follow these steps:

1. Clone the repository:

   ```shell
   git clone https://github.com/openvino-book/PP-OCRv4_OpenVINO.git
   cd PP-OCRv4_OpenVINO
   ```

2. Create a virtual environment and activate it:

   ```shell
   python3 -m venv venv
   source venv/bin/activate
   ```

3. Install the required packages:

   ```shell
   pip install -r requirements.txt
   ```

4. Download the PP-OCRv4 models into the PP-OCRv4_OpenVINO folder:

   ```shell
   # Download the detection model of PP-OCRv4
   wget https://paddleocr.bj.bcebos.com/PP-OCRv4/chinese/ch_PP-OCRv4_det_infer.tar && tar -xvf ch_PP-OCRv4_det_infer.tar
   # Download the recognition model of PP-OCRv4
   wget https://paddleocr.bj.bcebos.com/PP-OCRv4/chinese/ch_PP-OCRv4_rec_infer.tar && tar -xvf ch_PP-OCRv4_rec_infer.tar
   # Download the angle classifier of PP-OCRv4
   wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar && tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
   ```
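Each archive should extract to a directory containing the Paddle inference files (typically `inference.pdmodel` plus its `inference.pdiparams` weights). A minimal sketch for sanity-checking the downloads before running inference — the helper name and the required-file list are assumptions for illustration, not part of this project:

```python
from pathlib import Path

# Files a Paddle inference model directory is normally expected to contain.
REQUIRED_FILES = ("inference.pdmodel", "inference.pdiparams")

def missing_model_files(base_dir, model_dirs):
    """Return the expected model files that are absent under base_dir."""
    base = Path(base_dir)
    return [str(base / d / f)
            for d in model_dirs
            for f in REQUIRED_FILES
            if not (base / d / f).is_file()]

if __name__ == "__main__":
    missing = missing_model_files(".", [
        "ch_PP-OCRv4_det_infer",
        "ch_PP-OCRv4_rec_infer",
        "ch_ppocr_mobile_v2.0_cls_infer",
    ])
    if missing:
        print("Missing model files:", *missing, sep="\n  ")
    else:
        print("All model files found.")
```

If anything is reported missing, re-run the corresponding `wget`/`tar` command above.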
## Quick Start

To quickly start using the PP-OCRv4_OpenVINO project, run the inference script:

```shell
python main.py --image_dir images/general_ocr_006.png \
               --det_model_dir ch_PP-OCRv4_det_infer/inference.pdmodel \
               --det_model_device CPU \
               --rec_model_dir ch_PP-OCRv4_rec_infer/inference.pdmodel \
               --rec_model_device CPU \
               --cls_model_dir ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel \
               --cls_model_device CPU \
               --use_angle_cls True
```
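Under the hood, PP-OCRv4's recognition stage is CTC-based: the model emits per-timestep character scores, and decoding takes the argmax at each timestep, collapses consecutive repeats, and drops the blank symbol. An illustrative greedy decoder, using a tiny hypothetical charset rather than the project's actual character dictionary:

```python
def ctc_greedy_decode(logits, charset, blank=0):
    """Greedy CTC decoding: argmax per timestep, collapse
    consecutive repeats, and drop the blank index."""
    best_path = [max(range(len(step)), key=step.__getitem__) for step in logits]
    chars, prev = [], blank
    for idx in best_path:
        if idx != blank and idx != prev:
            chars.append(charset[idx])
        prev = idx
    return "".join(chars)

# Tiny hypothetical charset: index 0 is the CTC blank.
CHARSET = ["", "h", "i"]

# Five timesteps whose argmax indices are [1, 1, 0, 2, 2].
scores = [
    [0.1, 0.8, 0.1],
    [0.2, 0.7, 0.1],
    [0.9, 0.05, 0.05],
    [0.1, 0.1, 0.8],
    [0.1, 0.2, 0.7],
]
print(ctc_greedy_decode(scores, CHARSET))  # -> hi
```

The repeated `h` collapses to one character, and the blank between `h` and `i` allows genuine repeats to be distinguished from duplicated timesteps.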
## License

This project is licensed under the MIT License. See the LICENSE file for more details.
## Acknowledgements

This project is based on the PP-OCRv4 model from PaddleOCR, and most of the inference code comes from OnnxOCR. We would like to thank the PaddleOCR team and @jingsongliujing for their contributions to the OCR community.