
ARM ML embedded evaluation Kit support #7423

Open
dinusha94 opened this issue Dec 22, 2024 · 7 comments
Assignees
Labels
partner: arm For backend delegation, kernels, demo, etc. from the 3rd-party partner, Arm · triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@dinusha94

Hi, I am new to using ExecuTorch.

Is there a way I can use ExecuTorch with the ARM ML Embedded Evaluation Kit? It already uses TensorFlow Lite Micro. Does anyone have some guidelines for adding ExecuTorch to the ARM ML Embedded Evaluation Kit and using it through the ExecuTorch Module Extension in C++?

Thanks

@Jack-Khuu Jack-Khuu added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Dec 23, 2024
@Jack-Khuu
Contributor

cc: @digantdesai @mcr229 On running ET with ARM

@zingo zingo added the partner: arm For backend delegation, kernels, demo, etc. from the 3rd-party partner, Arm label Dec 23, 2024
@zingo
Collaborator

zingo commented Dec 23, 2024

Hi! It depends a bit on which CPU you want to support: is it something with an Ethos-U, or a pure Cortex-M CPU?

For Ethos-U, work has started and is ongoing here:

For a Cortex-M-only system there is almost no work yet. There is some info here on one way to start building it, but it is not working yet:
#7177

Currently, a design that uses fallback CPU code a bit more optimized than the standard kernels in ExecuTorch would be nice, and maybe an 8-bit quantized version instead of the float version would also speed things up.
So some sort of design work is needed to figure out a good way to do this, and input/help/suggestions/patches are welcome to get this going.

@dinusha94
Author

Thanks for the information and clarification, @zingo. I am trying to use it with a Cortex-M55 and an Ethos-U55 NPU. I am new to CMake and build stuff. According to #7177, it seems like we can use it with my setup.

Instead of running on bare metal as shown in https://github.com/pytorch/executorch/tree/main/examples/arm, can I use it with the C++ Module extension on the ARM platform: https://pytorch.org/executorch/stable/extension-module.html? If so, can you give me some hints on how to build it on an ARM embedded system?

I am familiar with the ARM ML Embedded Evaluation Kit; there we use cmake and make to build the project.

Please excuse my limited knowledge of these topics.

@zingo
Collaborator

zingo commented Dec 24, 2024

That's great! Cortex-M55/Ethos-U55 works more out of the box, and #7177 should not be needed, but some changes from it, like config/linker files, might be useful (see the comment later).

  1. In another/existing project
    This project is currently adding more and more support for ops running on the Ethos-U55/85 and is currently set up to build and run models on a Corstone-300/Corstone-320 system. But there is not that much special stuff going on in ./examples/arm/run.sh (it is mostly just a build script), and ExecuTorch can probably be used in an ARM ML Embedded Evaluation Kit setup as well.

When I did a quick try at something similar a few months back, I just used run.sh in the project to build the .pte file and all the libs. I then pointed to them (or you can copy them over) and included them in the other project's (c)makefiles, incorporated/edited the file examples/arm/executor_runner/arm_executor_runner.cpp into the same project, linked it with the libs generated by run.sh that it complained about, added include paths when it complained, and got something running in the end.

e.g. first just run

./examples/arm/setup.sh --i-agree-to-the-contained-eula
./examples/arm/run.sh --target=ethos-u55-128 --model_name=mul

Then use the .a files and the generated model_pte.h (this is the .pte file as a big memory blob) in your project.
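To make the "memory blob" idea concrete, here is a minimal self-contained sketch of the technique behind a generated model_pte.h (the actual generator and array names used by run.sh may differ; the file contents below are a stand-in, not a real model):

```shell
# Sketch of embedding a .pte file as a C array in a header, the idea behind
# the generated model_pte.h. Names/contents here are illustrative only.
printf 'PTE1' > model.pte        # stand-in for a real .pte file
xxd -i model.pte > model_pte.h   # bytes become `unsigned char model_pte[]` plus a length
cat model_pte.h                  # this is what your firmware links in and hands to the runtime
```

Embedding the model this way avoids needing a filesystem on the bare-metal target; the runtime is given a pointer and a length instead of a file path.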

  2. Adding support in this project
    Since the Ethos-U55 toolchain is included in this project (ExecuTorch), you should also be able to get something running just by changing the linker-file memory regions and the startup code to match your target. This is where getting inspired by the fork in "Issues with deployment on RP2040" #7177, e.g. main...AIWintermuteAI:executorch:port-to-rp2040,
    might come in handy. E.g. if you run

./examples/arm/setup.sh --i-agree-to-the-contained-eula

most (all?) of the needed stuff should be there (except maybe special flashing tools). Then change the linker .ld file that is used, add some HW-specific setup code to examples/arm/executor_runner/arm_executor_runner.cpp, and then run

./examples/arm/run.sh --target=ethos-u55-128 --model_name=mul

to build; this might work for you.

Even though we try to keep the example setup/flow a bit restricted by only supporting Corstone-3x0 (since this can easily be tested with the FVP "simulators", and we currently try not to fan out into a lot of targets), if you get something working and we learn what is really needed, it would be really great to figure out a way to make this smoother if possible. Patches/discussions to improve problems around this would be very welcome. Thanks for playing with this.

I hope this will get you further and solve some problems for you.

@dinusha94
Author

Thanks @zingo, I will try these.

@dinusha94
Author

Hi @zingo, I am having issues linking the libraries into my project as you described in "1. In another/existing project", so I did the following steps (by googling :) ) to use the Module extension in my project:

  1. Build ExecuTorch by following https://pytorch.org/executorch/stable/getting-started-setup.html
  2. Go to the cmake-out folder and install it using make && sudo make install
  3. Use the following CMake file in my project:

cmake_minimum_required(VERSION 3.19)
project(SampleProject)

# Specify C++ standard
set(CMAKE_CXX_STANDARD 17)

# Find and include ExecuTorch package (use executorch-config.cmake)
find_package(executorch REQUIRED CONFIG PATHS /usr/local/lib/cmake/ExecuTorch NO_DEFAULT_PATH)

# Check if ExecuTorch was found
if (EXECUTORCH_FOUND)
    message(STATUS "ExecuTorch found")
else()
    message(FATAL_ERROR "ExecuTorch not found")
endif()

# Include directories for ExecuTorch
include_directories(${EXECUTORCH_INCLUDE_DIRS})

# Add your source files
add_executable(SimpleApp src/main.cpp)

# Link libraries
target_link_libraries(SimpleApp ${EXECUTORCH_LIBRARIES})
  4. My main.cpp:
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

using namespace ::executorch::extension;

// Create a Module.
Module module("../models/mv2.pte");

When I try to make it, I get errors which seem to be related to linking:

main.cpp:(.text._ZN10executorch7runtime8internal4logfENS0_8LogLevelEmPKcS4_mS4_z[_ZN10executorch7runtime8internal4logfENS0_8LogLevelEmPKcS4_mS4_z]+0xd6): undefined reference to `executorch::runtime::internal::vlogf(executorch::runtime::LogLevel, unsigned long, char const*, char const*, unsigned long, char const*, __va_list_tag*)'
/usr/bin/ld: CMakeFiles/SimpleApp.dir/src/main.cpp.o: in function `executorch::runtime::EValue::toTensor() &':
main.cpp:(.text._ZNR10executorch7runtime6EValue8toTensorEv[_ZNR10executorch7runtime6EValue8toTensorEv]+0x28): undefined reference to `executorch::runtime::internal::get_log_timestamp()'
/usr/bin/ld: main.cpp:(.text._ZNR10executorch7runtime6EValue8toTensorEv[_ZNR10executorch7runtime6EValue8toTensorEv]+0x76): undefined reference to `executorch::runtime::runtime_abort()'
/usr/bin/ld: CMakeFiles/SimpleApp.dir/src/main.cpp.o: in function `executorch::runtime::EValue::toTensor() const &':

Could you please identify what I am doing wrong here, or describe how you used those .a libraries in your project?

Thanks
Dinusha

@zingo
Collaborator

zingo commented Dec 27, 2024

Hi, maybe you missed one or more lib files or .o files in your EXECUTORCH_LIBRARIES? E.g. the libs/files where the functions below live are apparently not being picked up:

executorch::runtime::internal::vlogf()
executorch::runtime::internal::get_log_timestamp()
executorch::runtime::runtime_abort()

I'm not at my computer right now, so I don't know which files they end up in; I have only looked at your message above. But if you search for vlogf/get_log_timestamp/runtime_abort in the project you will probably get some clues. There are also tools to peek into the .a/.o files and list their symbols; maybe nm could work. I think I ended up using one of those. Hope this helps.
