SDKExample

First setup

This is a simple example of how to use the Celantur SDK. The SDK is a library that allows you to easily integrate Celantur's anonymisation services into your app. To start, you will need to receive several things from Celantur:

  1. A model file
  2. A custom-made OpenCV Debian package, celantur-opencv.deb. This package is a modified build of OpenCV 4.7.0 that includes the functions the SDK needs.
  3. The Celantur SDK library for C++, celantur-cpp-processing.deb
  4. Depending on your system, you might also get a celantur-tensorrt.deb package. If you already have TensorRT installed, you will not need it.

Make sure that your system has CUDA installed. Celantur does not require any specific CUDA version, but some CUDA and TensorRT versions are incompatible with each other. Read more about this in Dependencies.md.
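
A quick way to check that CUDA is available (the exact output depends on your installation):

nvcc --version    # prints the installed CUDA toolkit version
nvidia-smi        # shows the driver status and the GPU it sees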

See Dependencies.md for known dependency limitations.

Installing dependencies

  1. (Optional) Remove the python2 dependencies that might create conflicts with the SDK:
sudo apt remove python2* libpython2*
  2. Remove preinstalled OpenCV, since we will use a custom version (read more about it in Dependencies.md):
sudo apt-get remove libopencv*
  3. Install the required repository dependencies:
sudo apt-get update && sudo apt install -y ffmpeg python3-dev cmake ninja-build libeigen3-dev libboost-all-dev
  4. Install the custom OpenCV package:
sudo apt install ./celantur-opencv*.deb
  5. Install the Celantur SDK:
sudo apt install ./celantur-cpp-processing*.deb
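
If you also received celantur-tensorrt.deb, install it the same way. Afterwards you can sanity-check that the packages are registered (a sketch, assuming a Debian/Ubuntu system; the exact package names reported by dpkg may differ):

sudo apt install ./celantur-tensorrt*.deb    # only if you received this package
dpkg -l | grep celantur                      # should list the installed Celantur packages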

Compile

mkdir -p build && cd build
cmake -GNinja -DCMAKE_INSTALL_PREFIX=/path/to/install/ ..
ninja && ninja install

Possible issues

CppProcessing is not found: Add -DCppProcessing_DIR=/usr/local/lib/cmake to the cmake configuration.

cmake -GNinja -DCMAKE_INSTALL_PREFIX=/path/to/install/ -DCppProcessing_DIR=/usr/local/lib/cmake ..
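
If the SDK was installed to a different prefix, you can search for the exported CMake config files and pass the directory that contains them (the file name pattern below is only an assumption about how the package is laid out):

find /usr -path '*cmake*' -iname '*cppprocessing*' 2>/dev/null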

Compile model

Before running the example, one needs to compile the model. The model that Celantur provides is in the universal, hardware-independent .onnx format. However, to achieve the best performance, we use NVIDIA's inference engine, TensorRT.

There are a few parameters that may or may not be supported, depending on your TensorRT version (a quick way to check the installed version is shown after this list):

  1. If the version is higher than 8.6, you can set the builder optimisation level.
  2. If the version is higher than 10, dynamic model resolution is supported.
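
One way to check which TensorRT version is installed, assuming it came from Debian packages (NVIDIA's own or celantur-tensorrt.deb; package names may vary):

dpkg -l | grep -E 'nvinfer|tensorrt'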

To convert a model from .onnx to .trt, one needs to compile it. Compiling a model, in the context of machine learning, means finding the best possible deployment on given hardware, and as such it must be performed on the target hardware. Model conversion needs to be done only once per hardware for any given set of parameters.

To compile the model, use the following CLI command:

/usr/local/bin/create_trt_from_onnx path/to/onnx.onnx \
                                    path/to/tensorrt-output.trt \
                                    <precision> \
                                    <builder optimisation> \
                                    width=min:opt:max \
                                    height=min:opt:max

Before going over all the parameters, here is a "safe" configuration that one can use to quickly run the SDK example without much setup:

/usr/local/bin/create_trt_from_onnx path/to/onnx.onnx \
                                    path/to/tensorrt-output.trt \
                                    FP32 \
                                    0 \
                                    width=1280:1280:1280 \
                                    height=1280:1280:1280

The first parameter in the list, <precision>, has two options: FP16 and FP32. It sets the precision and performance of the resulting model: FP16 will on average be faster but less precise, while FP32 will be more precise but less performant.

The second parameter, <builder optimisation>, denotes the optimisation level of the resulting model. It can take values from 0 to 5. The higher it is, the longer the model compilation will take, but the faster the inference phase will be. If your TensorRT version is < 8.6, this parameter does not influence the result, but it still needs to be provided.

The last two parameters denote the minimum/optimum/maximum resolution of a dynamic model. We suggest starting with the fixed resolution 1280:1280:1280 and figuring out later whether a different resolution is needed. The larger the resolution, the larger the images the model can process without loss of precision.

Dynamic resolution will not work if your TensorRT version is less than 10. In that case, please always use 1280:1280:1280, since earlier TensorRT versions contain a bug that makes a model with a dynamic input output gibberish.
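
As an illustration, on TensorRT 10 or newer a dynamic-resolution build could look like the following (the range 640:1280:1920 and the FP16/optimisation-level choices are only hypothetical examples; pick values that match your images and hardware):

/usr/local/bin/create_trt_from_onnx path/to/onnx.onnx \
                                    path/to/tensorrt-output-dynamic.trt \
                                    FP16 \
                                    3 \
                                    width=640:1280:1920 \
                                    height=640:1280:1920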

Run the SDK example

Finally, you can use the compiled model to anonymise any image:

cd path/to/install
./celantur_sdk_example test-img.jpg path/to/tensorrt-output.trt

Possible issues

If the following error appears: ./celantur_sdk_example: error while loading shared libraries: libprocessing.so: cannot open shared object file: No such file or directory, add the install location of the Celantur library to the library search path:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/

Alternatively, edit the runpath of the executable.
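
One way to do that (a sketch, assuming the patchelf tool is available and the library is installed under /usr/local/lib):

sudo apt install patchelf
patchelf --set-rpath /usr/local/lib ./celantur_sdk_example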
