Based on the previous survey, various inference frameworks have a phase that transforms a training model into an inference model: Compilation in AndroidNN, Conversion in CoreML, Build in TensorRT, and Converter in Tensorflow-Lite. Paddle Mobile also needs a compilation tool that transforms the training model into an inference model.
This compilation tool needs to support the following features (a sketch of the compression steps follows the list):

- Compile Paddle's training config and parameter files into a single inference file.
- Support rounding-based parameter compression.
- Support model optimization that merges Batch Normalization into the preceding layer.
- Support float32 → float16 parameter compression.
- Support float32 → uint8 parameter compression.
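As a rough illustration of the two low-precision modes, here is a minimal NumPy sketch, not the actual Paddle Mobile implementation; the function names are hypothetical. It shows float32 → float16 casting and a linear float32 → uint8 quantization where the scale and offset must be stored alongside the model:

```python
import numpy as np

def compress_fp16(params):
    # Halve storage by casting float32 weights to float16;
    # weights are cast back to float32 at load time.
    return params.astype(np.float16)

def compress_uint8(params):
    # Linear (affine) quantization: map [min, max] onto [0, 255].
    lo, hi = params.min(), params.max()
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((params - lo) / scale).astype(np.uint8)
    # scale and lo must be stored with the model so the runtime
    # can dequantize: params ≈ q * scale + lo
    return q, scale, lo

w = np.random.randn(3, 3).astype(np.float32)
q, scale, lo = compress_uint8(w)
w_restored = q.astype(np.float32) * scale + lo  # lossy reconstruction
```

Both modes trade a small accuracy loss for a 2x (float16) or 4x (uint8) reduction in parameter storage.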
I made a change to the merge_model script. Because we need the model config (.py) rather than a model prototxt, a command-line tool such as `converter --model_config mobilenet.py --model_parameters mobilenet.tar.gz --with_rounding True --merge_batch_normalization True` is not easy to build. Instead, I modified the original merge_v2_model interface to this:
#!/usr/bin/env python
# coding=utf-8
from merge_model import merge_v2_model
# from enet import Enet
from mobilenet_with_bn import mobile_net

# Build the MobileNet config (3*224*224 input, 102 classes).
out = mobile_net(3 * 224 * 224, 102)

# Merge the config and parameters into a single inference file,
# applying rounding and folding Batch Normalization.
merge_v2_model(out,
               './mobilenet_flowers102.tar.gz',
               'model.paddle',
               with_rounding=True,
               merge_batch_normalization=True)
The resulting model.paddle has rounding applied and Batch Normalization merged. Users no longer have to maintain a separate model config (.py) without BN. Several tests have been carried out to verify the correctness of the script.
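For reference, merging Batch Normalization works by folding the BN statistics into the weights and bias of the preceding layer, so the BN op disappears from the inference graph. A minimal NumPy sketch for a fully connected layer follows; the function and parameter names are illustrative, not Paddle's API:

```python
import numpy as np

def fold_batch_norm(W, b, gamma, beta, mean, var, eps=1e-5):
    # BN computes y = gamma * (x - mean) / sqrt(var + eps) + beta.
    # Applied after a linear layer x = W @ input + b, this is
    # equivalent to a single layer with rescaled weights and bias.
    std = np.sqrt(var + eps)
    W_folded = W * (gamma / std)[:, None]          # scale each output row
    b_folded = (b - mean) * gamma / std + beta
    return W_folded, b_folded

# Verify folding matches the original layer + BN pair.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal(4).astype(np.float32)
gamma, beta = np.ones(4, np.float32), np.zeros(4, np.float32)
mean = rng.standard_normal(4).astype(np.float32)
var = np.abs(rng.standard_normal(4)).astype(np.float32)

x = rng.standard_normal(8).astype(np.float32)
y_ref = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
Wf, bf = fold_batch_norm(W, b, gamma, beta, mean, var)
assert np.allclose(Wf @ x + bf, y_ref, atol=1e-5)
```

For a convolution with weights of shape (out_channels, in_channels, kh, kw), the same idea applies with the scale broadcast as `(gamma / std)[:, None, None, None]`. After folding, the inference graph runs the linear/conv layer alone and produces the same outputs (up to floating-point error) as the original layer + BN pair.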