At present, when we run model inference in a mobile environment, we hope the Paddle library can be as small as possible. So, when compiling Paddle for mobile environments (Android, iOS), we need to be able to strip out unneeded Paddle modules, thereby reducing the size of the inference program.
Based on the previous survey #1845, we found several modules (such as libpaddle_pserver.a, libpaddle_trainer_lib.a, and libpaddle_api.a) that are not related to inference but still take up space in the final inference program. So, consider adding a PADDLE_INFERENCE switch to strip these modules at compile time. Some of the CMakeLists.txt files already use WITH_C_API to do something similar. The remaining work is to replace WITH_C_API with PADDLE_INFERENCE and refine the CMakeLists.txt files for module stripping.
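As a minimal sketch of the idea, the new switch could be declared in the root CMakeLists.txt and used to skip non-inference targets. The option name PADDLE_INFERENCE comes from the proposal above; the subdirectory names and the exact set of guarded targets are illustrative assumptions, not the actual Paddle layout:

```cmake
# Hypothetical sketch: gate non-inference modules behind a new switch.
option(PADDLE_INFERENCE "Build only the modules needed for inference" OFF)

if(NOT PADDLE_INFERENCE)
  # The pserver and trainer libraries are only needed for training,
  # so they are built only when the inference-only switch is OFF.
  add_subdirectory(pserver)   # would build libpaddle_pserver.a
  add_subdirectory(trainer)   # would build libpaddle_trainer_lib.a
endif()
```

Configuring with `cmake -DPADDLE_INFERENCE=ON ..` would then skip those targets, shrinking the final mobile binary; existing `if(WITH_C_API)` guards could be migrated to this switch one file at a time.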