
How to deploy it on the mobile device? #12

Open
ElegantLin opened this issue Jun 10, 2022 · 3 comments

Comments

@ElegantLin

No description provided.

@Seanseattle
Owner

Hello, you can try Paddle Lite for deployment. It can reach more than 19 FPS when running on a mobile phone's GPU.
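
(Side note, not from the maintainer: a minimal sketch of what inference with an already-converted Paddle Lite model looks like via the paddlelite Python API. The .nb path and input shape below are placeholders, and real on-device deployment would normally go through Paddle Lite's C++/Java runtime on the phone.)

```python
import numpy as np
from paddlelite.lite import MobileConfig, create_paddle_predictor

# Load a model that has already been converted to Paddle Lite's .nb format
# (placeholder path; see the conversion sketch later in this thread).
config = MobileConfig()
config.set_model_from_file("model_lite.nb")
predictor = create_paddle_predictor(config)

# Feed a dummy input; the 1x3x224x224 shape is an assumption, not this repo's actual input.
input_tensor = predictor.get_input(0)
input_tensor.from_numpy(np.zeros((1, 3, 224, 224), dtype=np.float32))

predictor.run()
output_tensor = predictor.get_output(0)
print(output_tensor.numpy().shape)
```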

@ElegantLin
Author

Do you plan to release the code for deploying the model on a mobile device?

@ElegantLin
Author

ElegantLin commented Jul 30, 2022

> Hello, you can try Paddle Lite for deployment. It can reach more than 19 FPS when running on a mobile phone's GPU.

Hi, do you know how to convert the pdparams file to a Paddle Lite model? I think I have to use the opt tool for the conversion, which means I first need to export the pdparams to an inference model. I went through this link, but I don't know how to do the export when the model is a special ResNet. Could you please provide more information?

Thanks!
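
(For reference, a rough sketch of the usual pdparams → inference model → Paddle Lite conversion flow. The network below is only a stand-in, and the weight path and input shape are assumptions rather than values from this repo.)

```python
import paddle
import paddle.nn as nn
from paddle.static import InputSpec

# Stand-in network; in practice this would be the repo's actual generator/ResNet definition.
class TinyNet(nn.Layer):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2D(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet()
# Load the trained weights instead of random ones, e.g.:
# model.set_state_dict(paddle.load("model.pdparams"))  # placeholder path
model.eval()

# Export a static-graph inference model; this writes
# inference/model.pdmodel and inference/model.pdiparams.
paddle.jit.save(
    model,
    path="inference/model",
    input_spec=[InputSpec(shape=[1, 3, 224, 224], dtype="float32")],  # assumed input shape
)

# The inference model can then be converted to a Paddle Lite .nb file with the opt tool,
# roughly:
#   paddle_lite_opt --model_file=inference/model.pdmodel \
#                   --param_file=inference/model.pdiparams \
#                   --optimize_out=model_lite --valid_targets=arm
```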
