English | 中文

How to Build KunlunXin XPU Deployment Environment

FastDeploy supports deploying AI models on KunlunXin XPU using the Paddle Lite backend. For more detailed information, please refer to: Paddle Lite Deployment Example.

This document describes how to compile the C++ FastDeploy library based on Paddle Lite.

The relevant compilation options are described as follows:

| Compile Option | Default Value | Description | Remarks |
|:--|:--|:--|:--|
| ENABLE_LITE_BACKEND | OFF | It needs to be set to ON when compiling with the Paddle Lite backend | - |
| WITH_KUNLUNXIN | OFF | It needs to be set to ON when compiling the KunlunXin XPU library | - |
| ENABLE_ORT_BACKEND | OFF | Whether to integrate the ONNX Runtime backend | - |
| ENABLE_PADDLE_BACKEND | OFF | Whether to integrate the Paddle Inference backend | - |
| ENABLE_OPENVINO_BACKEND | OFF | Whether to integrate the OpenVINO backend | - |
| ENABLE_VISION | OFF | Whether to integrate vision models | - |
| ENABLE_TEXT | OFF | Whether to integrate text models | - |

The following options configure third-party libraries (optional; if an option is not defined, a prebuilt library will be downloaded automatically while building FastDeploy).

| Option | Description |
|:--|:--|
| ORT_DIRECTORY | When ENABLE_ORT_BACKEND=ON, use ORT_DIRECTORY to specify your own ONNX Runtime library path. |
| OPENCV_DIRECTORY | When ENABLE_VISION=ON, use OPENCV_DIRECTORY to specify your own OpenCV library path. |
| OPENVINO_DIRECTORY | When ENABLE_OPENVINO_BACKEND=ON, use OPENVINO_DIRECTORY to specify your own OpenVINO library path. |
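
For example, you can pass these paths at configure time. The following is a minimal sketch; the directories shown (e.g. /opt/onnxruntime) are placeholders for illustration, not required locations:

```bash
# Illustrative only: point FastDeploy at locally installed third-party libraries.
# Replace the placeholder paths with your own installation directories.
cmake -DWITH_KUNLUNXIN=ON \
      -DENABLE_ORT_BACKEND=ON \
      -DORT_DIRECTORY=/opt/onnxruntime \
      -DENABLE_VISION=ON \
      -DOPENCV_DIRECTORY=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
      ..
```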

For more compilation options, please refer to the Description of FastDeploy compilation options.

C++ FastDeploy library compilation based on Paddle Lite

  • OS: Linux
  • gcc/g++: version >= 8.2
  • cmake: version >= 3.15
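
To confirm the toolchain meets these requirements before building, you can check the installed versions (a simple sanity check, not part of the original build steps):

```bash
# Expect gcc/g++ >= 8.2 and cmake >= 3.15.
gcc --version
g++ --version
cmake --version
```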

It is recommended to install the OpenCV library manually and define -DOPENCV_DIRECTORY to set the path of the OpenCV library (if the flag is not defined, a prebuilt OpenCV library will be downloaded automatically while building FastDeploy, but the prebuilt OpenCV does not support reading video files or functions such as imshow).

```bash
sudo apt-get install libopencv-dev
```
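
To find the directory to pass via -DOPENCV_DIRECTORY, one option is to list the files installed by the package (the command below is just one way to locate the config; the exact path varies by distribution):

```bash
# Locate OpenCVConfig.cmake; its containing directory is what -DOPENCV_DIRECTORY
# should point to (commonly /usr/lib/x86_64-linux-gnu/cmake/opencv4 on Ubuntu).
dpkg -L libopencv-dev | grep OpenCVConfig.cmake
```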

The compilation command is as follows:

```bash
# Download the latest source code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build

# CMake configuration with the KunlunXin XPU toolchain
cmake -DWITH_KUNLUNXIN=ON  \
      -DWITH_GPU=OFF  \
      -DENABLE_ORT_BACKEND=ON  \
      -DENABLE_PADDLE_BACKEND=ON  \
      -DCMAKE_INSTALL_PREFIX=fastdeploy-kunlunxin \
      -DENABLE_VISION=ON \
      -DOPENCV_DIRECTORY=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
      ..

# Build FastDeploy KunlunXin XPU C++ SDK
make -j8
make install
```

After the compilation is complete, the fastdeploy-kunlunxin directory will be generated, indicating that the Paddle Lite-based FastDeploy library has been compiled.
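
As a quick sanity check, you can inspect the generated SDK and make its shared libraries visible to programs linked against it. The exact library sub-directories depend on the backends you enabled, so treat the paths below as an illustrative sketch rather than a fixed layout:

```bash
# Inspect the generated SDK (headers, libraries and bundled third-party dependencies).
ls fastdeploy-kunlunxin

# Illustrative only: expose the SDK's core library directory to the dynamic loader.
# Backend-specific libraries under third_libs/ may also need to be added, depending
# on which backends were enabled at compile time.
export LD_LIBRARY_PATH=$(pwd)/fastdeploy-kunlunxin/lib:$LD_LIBRARY_PATH
```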

Python FastDeploy library compilation based on Paddle Lite

The compilation command is as follows:

```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/python
export WITH_KUNLUNXIN=ON
export WITH_GPU=OFF
export ENABLE_ORT_BACKEND=ON
export ENABLE_PADDLE_BACKEND=ON
export ENABLE_VISION=ON
# OPENCV_DIRECTORY is optional; if not exported, a prebuilt OpenCV library will be downloaded
export OPENCV_DIRECTORY=/usr/lib/x86_64-linux-gnu/cmake/opencv4

python setup.py build
python setup.py bdist_wheel
```

After the compilation is completed, the wheel package will be generated in the FastDeploy/python/dist directory; install it directly with pip.
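
For example (the wheel filename varies with the FastDeploy version and your Python environment, so the glob below is only illustrative):

```bash
# Install the freshly built wheel and check that the package imports correctly.
pip install dist/*.whl
python -c "import fastdeploy"
```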

If you modify the compilation parameters, delete the build and .setuptools-cmake-build subdirectories under the FastDeploy/python directory before recompiling, so that cached build artifacts do not affect the new build.
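
A minimal sketch of clearing those caches before rebuilding:

```bash
# Remove cached build artifacts, then rebuild with the new options.
cd FastDeploy/python
rm -rf build .setuptools-cmake-build
python setup.py build
python setup.py bdist_wheel
```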