
Target c10 directly on Android #21

Open
Hixie opened this issue Jan 27, 2023 · 19 comments

Comments

@Hixie

Hixie commented Jan 27, 2023

Currently, on Android, this package goes from Dart to Java via message channels, then from Java to C++ glue code via JNI, and then from that C++ to the actual core PyTorch library (and then all the way back). It would be more efficient if the Dart code used FFI to drive the C++ code directly.
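The direct path proposed here boils down to exposing functions with a plain C ABI that Dart can bind with dart:ffi (via DynamicLibrary.lookup), skipping both the message-channel and JNI hops. A minimal sketch, where the function name and signature are hypothetical and not part of this package's API:

```cpp
#include <cstdint>

// Hypothetical C-ABI entry point. extern "C" prevents C++ name mangling,
// so Dart can look the symbol up by name and call it directly. Plain C
// types (pointers, int32_t, float) cross the FFI boundary without any
// serialization, unlike message-channel codecs.
extern "C" float native_mean(const float* data, int32_t length) {
  double sum = 0.0;
  for (int32_t i = 0; i < length; ++i) sum += data[i];
  return length > 0 ? static_cast<float>(sum / length) : 0.0f;
}
```

On the Dart side this would be bound as something like `Float Function(Pointer<Float>, Int32)`, with the input written into memory allocated by `package:ffi`.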

@cyrillkuettel

I think this is a great idea. It would then be possible to share a lot of the same code with iOS. Unfortunately, there are not many resources on using the PyTorch C++ API on Android.

@dvagala

dvagala commented Mar 5, 2023

I've been trying this for the past few days, compiling PyTorch for Android as C++ libraries and linking them in CMake, but with no luck. I couldn't get past undefined reference errors. If anyone has managed to do it successfully, please tag me.

@cyrillkuettel

@dvagala Did you try NativeApp from the official pytorch demo?

@dvagala

dvagala commented Mar 6, 2023

@cyrillkuettel Thank you! This was the right direction. I've tried it now, tweaked it a bit, and it's working ❤️
My problem was that I was trying to follow this tutorial to build PyTorch from source to get the .a libraries and link them in CMakeLists.txt, but I was hitting a dead end.

For anyone with the same issue, here is how I managed to add native C++ LibTorch to FFI Android/Flutter:

  • download the .aar pytorch library from here
    • unzip it, and copy .so libraries and headers from jni/ and headers/ somewhere to your project (I put mine to libtorch-android/lib and libtorch-android/include)
  • add this to your CMakeLists.txt
cmake_minimum_required(VERSION 3.4.1)
project(native_image_processor)

set(CMAKE_CXX_STANDARD 14) # needed for libtorch
set(LIBTORCH_BASE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../libtorch-android")
set(PYTORCH_INCLUDE_DIRS "${LIBTORCH_BASE_DIR}/include")
set(LIBTORCH_LIB_DIR "${LIBTORCH_BASE_DIR}/lib/${ANDROID_ABI}")

# The target being built; use whatever your C++ glue source file is called
add_library(native_image_processor SHARED native_image_processor.cpp)

add_library(libcplusplus SHARED IMPORTED)
set_target_properties(libcplusplus PROPERTIES IMPORTED_LOCATION ${LIBTORCH_LIB_DIR}/libc++_shared.so)
add_library(libfbjni SHARED IMPORTED)
set_target_properties(libfbjni PROPERTIES IMPORTED_LOCATION ${LIBTORCH_LIB_DIR}/libfbjni.so)
add_library(libpytorch_jni SHARED IMPORTED)
set_target_properties(libpytorch_jni PROPERTIES IMPORTED_LOCATION ${LIBTORCH_LIB_DIR}/libpytorch_jni.so)

target_include_directories(native_image_processor PRIVATE
        ${PYTORCH_INCLUDE_DIRS})
target_link_libraries(native_image_processor
        libcplusplus
        libfbjni
        libpytorch_jni)

Some benchmarks - FFI vs Current version of the package

The inference is significantly faster through FFI. Moreover, with bigger input tensors, I was getting an OutOfMemoryError before, and now with the FFI it's totally fine.

The current version of the package:
Input shape (1, 3, 350, 350) - inference 886ms
Input shape (1, 3, 700, 700) - inference 3050ms
Input shape (1, 3, 2000, 2000) - inference OutOfMemoryError
Input shape (1, 3, 8750, 8750) - inference OutOfMemoryError

FFI:
Input shape (1, 3, 350, 350) - inference 250ms
Input shape (1, 3, 700, 700) - inference 314ms
Input shape (1, 3, 2000, 2000) - inference 580ms
Input shape (1, 3, 8750, 8750) - inference 12114ms

(I measured the time from passing the model input as a Dart List to getting the output back as a Dart List, so the conversions to/from C++ data structures are included in the measurements. The test ML model was LDC.)
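A plausible explanation for both the speedup and the disappearing OutOfMemoryError is that a platform channel must serialize and copy the whole tensor at every hop (Dart codec, Java array, JNI buffer), while dart:ffi lets Dart write the input once into natively allocated memory and lets C++ read the same bytes in place. A minimal sketch of that in-place pattern, with an illustrative function name:

```cpp
#include <cstdint>

// Sketch of the zero-copy pattern dart:ffi enables: Dart allocates one
// native buffer (e.g. with calloc from package:ffi), fills it with the
// input tensor data, and passes only the pointer. C++ then reads and
// writes that memory directly; no intermediate copies are ever made.
extern "C" void scale_in_place(float* data, int32_t length, float factor) {
  for (int32_t i = 0; i < length; ++i) {
    data[i] *= factor;  // operates on the caller's buffer, not a copy
  }
}
```

With a (1, 3, 8750, 8750) input that difference is roughly a 900 MB float buffer copied once versus several times, which is consistent with the OutOfMemoryError going away.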

Please note that I don't currently have time to rewrite this package and make a PR.

@cyrillkuettel

@dvagala Big if true! Wow! Those are some drastic performance improvements. That's amazing; I have to admit I did not expect that much of a gain. It seems that the platform/method channel was actually really slow. How did you solve it? Did you do the pre-processing in C++ as well?
Can you share the code?

As a side note, your C++ might be suitable for iOS as well, since iOS can call into the same C++ code through Objective-C++. We might even be able to get rid of platform channels entirely.

@dvagala

dvagala commented Mar 7, 2023

@cyrillkuettel Sure! I've made an example project here

Yes, it's running on both Android and iOS, and all the code is shared. That's the beauty of Flutter :)

@beroso

beroso commented Jun 22, 2023

@dvagala I was trying to follow your steps but using the flutter create --template=plugin_ffi command.
I couldn't make it work on iOS. Do you have any experience with it?

Update: It's ok now, I forgot to copy the .cpp file to the Runner target.

Another question: is it possible to make an FFI plugin with s.static_framework = true in the podspec? The workaround makes it difficult to distribute the plugin. If not, is there a way to automate the file copy after the Pods install phase?

@abdelaziz-mahdy

abdelaziz-mahdy commented Sep 12, 2023

@beroso
did you find a fix for this?

@cyrillkuettel

@beroso Hi, I wanted to ask if you eventually found a solution for the iOS C++ files?

I have the time and motivation to rewrite this package to dart:ffi. But I need to find a way to compile/include the C++ files. Right now s.static_framework = true works for development purposes, but I need to add each file manually to Xcode, and we don't want users of the plugin to have to do that...

@abdelaziz-mahdy

Hello, I suffered from the same problem in my package, but the solution suggested to me was to build a static library containing PyTorch and the C++ FFI code, and use that in the iOS pod.

Sadly my knowledge of iOS is very limited, so I failed to do so and reverted to Objective-C and Java.

I hope this information helps you.

@cyrillkuettel

Hello, thanks for the information. I believe I have found the solution to this problem. I have not tested it yet, but it looks promising:

Step 2: Building and bundling native code

  plugin:
    platforms:
      some_platform:
        ffiPlugin: true  # if we set this, Flutter will build and bundle the native code

This configuration invokes the native build for the various target platforms and bundles the binaries in Flutter applications using these FFI plugins.
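For completeness, the concrete shape this would take for the two platforms discussed in this thread might look like the following (based on the Flutter FFI plugin docs; I have not run this exact config):

```yaml
flutter:
  plugin:
    platforms:
      android:
        ffiPlugin: true   # builds via android/CMakeLists.txt through Gradle
      ios:
        ffiPlugin: true   # builds the sources referenced by the plugin podspec
```

With this, the plugin's native sources are compiled and bundled automatically, so users would not need to add files to Xcode by hand.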

@abdelaziz-mahdy

I believe I did try that, but if it works for you, let me know. FFI was more stable and more predictable, and I wish to go back to it.

@cyrillkuettel

Yes, it's stable, and you don't have duplicated inference code for each host platform. There are also some neat tricks I found in https://github.com/ValYouW/flutter-opencv-stream-processing, so with dart:ffi we can actually have a true shared-memory solution. One could imagine writing the input image to a buffer and reading it from the other side in C++.
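The shared-buffer idea can be sketched as a tiny C ABI: C++ hands Dart a pointer to a buffer, Dart writes each frame into it through a typed `Pointer<Uint8>`, and C++ reads the very same bytes with no copy. All names here are hypothetical, a sketch of the pattern rather than any package's real API:

```cpp
#include <cstdint>

// Allocate a frame buffer on the native heap and return the pointer to Dart.
// Dart wraps it as Pointer<Uint8> and writes pixel data into it directly,
// so both sides share the same memory.
extern "C" uint8_t* create_frame_buffer(int32_t size) {
  return new uint8_t[size]();  // zero-initialized
}

// C++ consumes the frame in place; summing stands in for real processing.
extern "C" int64_t sum_frame(const uint8_t* buffer, int32_t size) {
  int64_t sum = 0;
  for (int32_t i = 0; i < size; ++i) sum += buffer[i];
  return sum;
}

// Dart calls this when the stream stops, since dart:ffi memory is not
// garbage collected.
extern "C" void destroy_frame_buffer(uint8_t* buffer) {
  delete[] buffer;
}
```

The key property is that the buffer is allocated once and reused per frame, so a camera stream never triggers per-frame serialization the way platform channels do.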

@abdelaziz-mahdy

abdelaziz-mahdy commented Nov 23, 2023

Yes, I did that too, using the repo as a reference 😅.

Here is the latest commit using FFI that I could find, in case it helps you:
abdelaziz-mahdy/pytorch_lite@b047eda

@cyrillkuettel

cyrillkuettel commented Nov 23, 2023

Fascinating. I have done a lot of the same things in my own project (not open source yet, because I need to fix this distribution problem and clean things up).

I only discovered your package today 🙃

@abdelaziz-mahdy

From what I know, the only feature I don't provide that exists in your package is inference on a direct list of values.

I have inference for YOLOv5 and YOLOv8, and I used Pigeon to ensure null-safe communication and async operations.

@cyrillkuettel

I'm not the owner of this project :)

@abdelaziz-mahdy

I'm not the owner of this project :)

Oh, my bad 😂 I didn't notice. If you are able to do it, let me know, and if you want to make a PR, I would love that.

@abdelaziz-mahdy

@cyrillkuettel I made abdelaziz-mahdy/pytorch_lite#65, a PR on the latest commit for FFI, just to extract it from history in case you need to check it.

I also included the points that need to be fixed for the FFI to be used.
https://github.com/zezo357/pytorch_lite/tree/latest-ffi is the branch with the last edits for FFI.
