
Documentation on how to use ONNX models #108

Open · Leftyx opened this issue Sep 10, 2024 · 10 comments

Comments

Leftyx commented Sep 10, 2024

Feature request type

sample request

Is your feature request related to a problem? Please describe

In the documentation there is always a reference to Mkldnn usage but, apparently, the device also supports ONNX.
I cannot find any sample code or explanation of how to use it.
I have fiddled with the code, replacing the device, but it does not work.

Describe the solution you'd like

Some sample code and some documentation for the ONNX integration.

Describe alternatives you've considered

RapidOCR (C# integration), which provides a solution for ONNX models.

Additional context

No response

sdcb (Owner) commented Sep 10, 2024

You can refer to: https://github.com/sdcb/PaddleSharp?tab=readme-ov-file#paddle-devices

Leftyx (Author) commented Sep 10, 2024

It does not help at all. Where should I put the ONNX models? Which bits and pieces of code should I change?
A super-short example would be enough to guide us through.

Thanks

sdcb (Owner) commented Sep 10, 2024

No, it does not support reading ONNX models directly; it reads Paddle models, converts them into ONNX, and then runs inference.

Leftyx (Author) commented Sep 10, 2024

I don't quite understand.
I would expect the process to convert the models to ONNX and then use those (ONNX) models.

sdcb (Owner) commented Sep 10, 2024

When you specify PaddleDevice.Onnx(), it converts the Paddle model into an ONNX model in memory and then runs inference on that in-memory ONNX model.
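
For illustration, a minimal sketch of that flow, assuming the PaddleOcrAll / LocalFullModels API shown in the README (the model choice and image path are placeholders):

```csharp
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.Local;

// Paddle-format model shipped with the local models package (placeholder choice).
FullOcrModel model = LocalFullModels.EnglishV3;

// PaddleDevice.Onnx() converts the Paddle model to ONNX in memory and runs it
// through the ONNX backend; swap in PaddleDevice.Mkldnn() to compare.
using PaddleOcrAll all = new PaddleOcrAll(model, PaddleDevice.Onnx())
{
    AllowRotateDetection = true,
    Enable180Classification = false,
};

using Mat src = Cv2.ImRead("sample.png"); // hypothetical input image
PaddleOcrResult result = all.Run(src);
Console.WriteLine(result.Text);
```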

Leftyx (Author) commented Sep 10, 2024

I assume that would slow down the process quite a lot, so there is no added value in using ONNX, I imagine.
It would also mean I cannot use a local model!? The servers where the inference happens are offline.
Thanks for the explanation.

sdcb (Owner) commented Sep 10, 2024

It should be quite fast; I have noticed that in most scenarios the speed is quite good, even compared to mkldnn.

Leftyx (Author) commented Sep 10, 2024

Correct me if I am wrong: I would have to use the online models, right?
Anyway, the product is very well engineered. Great job 😄

n0099 (Contributor) commented Sep 10, 2024

> And also it would mean I cannot use a local model!?

#105 #104 #84 #80 #62 #32

@sdcb Could you pin some of these issues?

sdcb (Owner) commented Sep 10, 2024

Oh, thank you @n0099, I missed that part. Yes, @Leftyx, you can use local models: try the FromDirectory method or use the Local models package.
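
To sketch what that might look like (the directory paths, label file, and FromDirectory overloads below are assumptions and should be checked against the actual Sdcb.PaddleOCR API):

```csharp
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;
using Sdcb.PaddleOCR.Models.Local;

// Option 1: models embedded in the local models NuGet package,
// so nothing is downloaded at runtime (works on offline servers).
FullOcrModel model = LocalFullModels.EnglishV3;

// Option 2 (sketch, exact FromDirectory overloads may differ): load Paddle-format
// models you copied to the server yourself; paths and label file are placeholders.
// FullOcrModel model = new FullOcrModel(
//     DetectionModel.FromDirectory(@"models/det", ModelVersion.V3),
//     ClassificationModel.FromDirectory(@"models/cls"),
//     RecognizationModel.FromDirectory(@"models/rec", @"models/rec/labels.txt", ModelVersion.V3));

// Either way, inference runs fully locally; the ONNX conversion happens in memory.
using PaddleOcrAll all = new PaddleOcrAll(model, PaddleDevice.Onnx());
```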
