
English | 简体中文

Paddle.js WeChat mini-program Demo

1. Introduction

This directory contains the text detection and text recognition mini-program demos. They use Paddle.js and the Paddle.js WeChat mini-program plugin to draw text detection boxes inside the mini-program, running inference with the computing power of the user's device.

2. Project startup

2.1 Preparations

For details, please refer to the documentation.

2.2 Startup steps

1. Clone the demo code

git clone https://github.com/PaddlePaddle/FastDeploy
cd FastDeploy/examples/application/js/mini_program

2. Enter the mini-program directory and install dependencies

# To run the text recognition demo, enter the ocrXcx directory
cd ./ocrXcx && npm install
# To run the text detection demo, enter the ocrdetectXcx directory instead
# cd ./ocrdetectXcx && npm install

3. Import the code into WeChat Developer Tools

Open WeChat Developer Tools --> Import --> select the project directory and fill in the relevant information

4. Add the Paddle.js WeChat mini-program plugin

Mini-program admin console --> Settings --> Third-party settings --> Plugin management --> Add plugin --> search for wx7138a7bb793608c3 and add it (see the reference document)
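
In addition, the plugin needs to be declared in the mini-program's app.json. A minimal sketch is shown below; the version number here is only an assumption, so use the latest version listed on the plugin page. The key paddlejs-plugin is the name passed to requirePlugin in the inference code in section 3.

{
  "plugins": {
    "paddlejs-plugin": {
      "version": "2.0.0",
      "provider": "wx7138a7bb793608c3"
    }
  }
}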

5. Build dependencies

In the developer tools menu bar, click: Tools --> Build npm

Reason: the node_modules directory is not included when the project is compiled, uploaded, or packaged. For a mini-program to use npm packages, it must go through the "Build npm" step. After the build completes, a miniprogram_npm directory is generated containing the built and packaged npm packages; these are the packages the mini-program actually uses at runtime. See the reference documentation.
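
For reference, the Paddle.js packages imported by the inference code in section 3 are declared as ordinary npm dependencies in the mini-program's package.json, roughly as sketched below; the version ranges are assumptions and may differ from the ones pinned in this repository.

{
  "dependencies": {
    "@paddlejs/paddlejs-core": "^2.0.0",
    "@paddlejs/paddlejs-backend-webgl": "^2.0.0"
  }
}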

2.3 Visualization

3. Model inference pipeline

// Import paddlejs and the paddlejs plugin, then register the mini-program environment and the appropriate backend
import * as paddlejs from '@paddlejs/paddlejs-core';
import '@paddlejs/paddlejs-backend-webgl';
const plugin = requirePlugin('paddlejs-plugin');
plugin.register(paddlejs, wx);

// Initialize the inference engine
const runner = new paddlejs.Runner({modelPath, feedShape, mean, std});
await runner.init();

// Get the image data from the canvas
wx.canvasGetImageData({
    canvasId: canvasId,
    x: 0,
    y: 0,
    width: canvas.width,
    height: canvas.height,
    success(res) {
        // Run inference on the image data
        runner.predict({
            data: res.data,
            width: canvas.width,
            height: canvas.height,
        }, function (data) {
            // Handle the inference result here
            console.log(data);
        });
    }
});
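
For context, the sketch below shows where these calls might sit inside a mini-program page. It is only an illustration: the page file name, the canvas id ocr-canvas, the model URL, the feed shape, and the mean/std values are placeholders, not values taken from this repository.

// Hypothetical page, e.g. pages/ocr/index.js -- placeholder values throughout
import * as paddlejs from '@paddlejs/paddlejs-core';
import '@paddlejs/paddlejs-backend-webgl';

const plugin = requirePlugin('paddlejs-plugin');
plugin.register(paddlejs, wx);

// Placeholder model configuration: replace with the model you actually deploy
const runner = new paddlejs.Runner({
    modelPath: 'https://example.com/path/to/model',
    feedShape: { fw: 960, fh: 960 },
    mean: [0.485, 0.456, 0.406],
    std: [0.229, 0.224, 0.225]
});

Page({
    async onReady() {
        // Initialize the inference engine once the page (and its canvas) is ready
        await runner.init();
    },
    onDetect() {
        // Bound to a button tap in the WXML; 'ocr-canvas' is a placeholder canvas-id
        wx.canvasGetImageData({
            canvasId: 'ocr-canvas',
            x: 0,
            y: 0,
            width: 960,
            height: 960,
            success: (res) => {
                runner.predict({
                    data: res.data,
                    width: 960,
                    height: 960
                }, (output) => {
                    // Post-process the output and draw the detection boxes here
                    console.log(output);
                });
            }
        });
    }
});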

4. FAQ

  • 4.1 The error Invalid context type [webgl2] for Canvas#getContext appears

    A: You can ignore it; it does not affect normal operation of the code or the demo's functionality.

  • 4.2 No result is visible in Preview mode

    A: Try debugging on a real device instead.

  • 4.3 WeChat Developer Tools shows a black screen, followed by a flood of errors

    A: Restart WeChat Developer Tools.

  • 4.4 The results in the simulator and on a real device are inconsistent, or the simulator fails to detect any text

    A: The result on the real device takes precedence. If the simulator fails to detect text, try making a trivial change to the code (add, delete, or insert a newline somewhere) and click Compile again.

  • 4.5 The phone shows prompts such as "no response for a long time" during real-device debugging or running

    A: Please keep waiting; model inference takes some time.