
Play with tflite

  • Sample projects that use TensorFlow Lite in C++ across multiple platforms
  • The typical project structure is shown in the diagram 00_doc/design.jpg

Target

  • Platform
    • Linux (x64)
    • Linux (armv7)
    • Linux (aarch64)
    • Android (aarch64)
    • Windows (x64) with Visual Studio 2019
  • Delegate
    • Edge TPU
    • XNNPACK
    • GPU
    • NNAPI (CPU, GPU, DSP)

Usage

./main [input]

  • input = blank
    • use the default image file set in the source code (main.cpp)
    • e.g. ./main
  • input = *.mp4, *.avi, *.webm
    • use the specified video file
    • e.g. ./main test.mp4
  • input = *.jpg, *.png, *.bmp
    • use the specified image file
    • e.g. ./main test.jpg
  • input = number (e.g. 0, 1, 2, ...)
    • use the camera with that index
    • e.g. ./main 0
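For reference, the dispatch described above could look like the following in OpenCV-based C++ (a hypothetical sketch, not the actual main.cpp; the default image path and structure are illustrative):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cctype>
#include <string>

int main(int argc, char* argv[]) {
    const std::string input = (argc > 1) ? argv[1] : "";
    if (input.empty()) {
        cv::Mat frame = cv::imread("default.jpg");   // hypothetical default image
        // ... run inference on this single frame
    } else if (std::all_of(input.begin(), input.end(), ::isdigit)) {
        cv::VideoCapture cap(std::stoi(input));      // numeric argument: camera index
        // ... capture loop
    } else if (input.size() > 4 &&
               (input.compare(input.size() - 4, 4, ".jpg") == 0 ||
                input.compare(input.size() - 4, 4, ".png") == 0 ||
                input.compare(input.size() - 4, 4, ".bmp") == 0)) {
        cv::Mat frame = cv::imread(input);           // still image file
    } else {
        cv::VideoCapture cap(input);                 // video file (*.mp4, *.avi, *.webm)
    }
    return 0;
}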

How to build a project

0. Requirements

  • OpenCV 4.x

1. Download

  • Download source code and pre-built libraries
    git clone https://github.com/iwatake2222/play_with_tflite.git
    cd play_with_tflite
    git submodule update --init
    sh InferenceHelper/third_party/download_prebuilt_libraries.sh
  • Download models
    sh ./download_resource.sh

2-a. Build in Linux

cd pj_tflite_cls_mobilenet_v2   # for example
mkdir -p build && cd build
cmake ..
make
./main

2-b. Build in Windows (Visual Studio)

  • Configure and Generate a new project using cmake-gui for Visual Studio 2019 64-bit
    • Where is the source code : path-to-play_with_tflite/pj_tflite_cls_mobilenet_v2 (for example)
    • Where to build the binaries : path-to-build (any)
  • Open main.sln
  • Set the main project as the startup project, then build and run!
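The same project can also be generated from a command prompt; a minimal sketch, assuming the Visual Studio 2019 generator:

cd pj_tflite_cls_mobilenet_v2   # for example
mkdir build
cd build
cmake .. -G "Visual Studio 16 2019" -A x64
cmake --build . --config Release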

2-c. Build in Android Studio

  • Please refer to
  • Copy the resource directory to /storage/emulated/0/Android/data/com.iwatake.viewandroidtflite/files/Documents/resource (e.g. via adb, as sketched after this list)
    • this directory is created after the app runs once, so the first run is expected to fail because the model files cannot be read yet
  • Modify ViewAndroid\app\src\main\cpp\CMakeLists.txt to select the image processor you want to use
    • set(ImageProcessor_DIR "${CMAKE_CURRENT_LIST_DIR}/../../../../../pj_tflite_cls_mobilenet_v2/image_processor")
    • replace pj_tflite_cls_mobilenet_v2 with another project name
  • By default, InferenceHelper::TENSORFLOW_LITE_DELEGATE_XNNPACK is used. You can modify ViewAndroid\app\src\main\cpp\CMakeLists.txt to select which delegate to use; InferenceHelper::TENSORFLOW_LITE_GPU usually gives better performance.
    • You also need to select the framework when calling InferenceHelper::create (see the sketch under Note below)
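One way to copy the resource directory is adb; a minimal sketch, assuming the device is connected and the app has been run once so the destination directory exists:

# push the extracted resource directory to the app's Documents folder
adb push resource /storage/emulated/0/Android/data/com.iwatake.viewandroidtflite/files/Documents/resource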

Note

Options (Delegate)

# Edge TPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=on  -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
cp libedgetpu.so.1.0 libedgetpu.so.1
#export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`
sudo LD_LIBRARY_PATH=./ ./main
# you may get "Segmentation fault (core dumped)" without sudo

# GPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=on  -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
# you may need `sudo apt install ocl-icd-opencl-dev` or `sudo apt install libgles2-mesa-dev`

# XNNPACK
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=on

# NNAPI (NNAPI is Android-only, so set this option in CMakeLists.txt in Android Studio rather than running the command below)
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_NNAPI=on

You also need to select the corresponding framework when calling InferenceHelper::create, for example:
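A minimal sketch of that call; the header name and return type are assumptions, so check the actual InferenceHelper declaration in this repository:

#include "inference_helper.h"   // assumption: the actual header name may differ
#include <memory>

// Create a helper backed by the GPU delegate; swap the enum
// (e.g. InferenceHelper::TENSORFLOW_LITE_DELEGATE_XNNPACK) to match
// the delegate enabled at build time.
std::unique_ptr<InferenceHelper> helper(
    InferenceHelper::create(InferenceHelper::TENSORFLOW_LITE_GPU));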

EdgeTPU

NNAPI

By default, NNAPI selects the most appropriate accelerator for the model. To choose an accelerator yourself, modify the following code in InferenceHelperTensorflowLite.cpp (uncomment the line for the accelerator you want):

// options.accelerator_name = "qti-default";
// options.accelerator_name = "qti-dsp";
// options.accelerator_name = "qti-gpu";

License

Acknowledgements

  • This project utilizes OSS (Open Source Software)
  • This project utilizes models from other projects:
    • Please find model_information.md in resource.zip