Inputs format misunderstood. (prepare_input) (AIV-743) #193
Comments
@nicklasb Can you share the onnx model file?
@nicklasb
So I think that I have fixed the input shape; however, now I encounter another issue: ReduceMin is not implemented in ESP-DL.
Or maybe there is an easier way: how did ESP-DL implement the pedestrian_detect model?
@100312dog Ok. I am not sure what you mean; I suppose it was some Paddle project that generated the model (the name looks like it). Also, what do you mean by "the unnecessary part", and how do I remove it?
If you are using the official PaddleDetection project, use this command to export the model.
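The exact command was not preserved in this copy of the thread. A minimal sketch, based on the official PaddleDetection PicoDet export documentation, would look roughly like the following; the config path, weights file, and output directory are placeholders, and export.benchmark=True is the documented option for exporting without the post-processing part of the graph:

```shell
# Sketch only: run from the PaddleDetection repository root.
# Config and weights paths are placeholders for your actual PicoDet model.
python tools/export_model.py \
    -c configs/picodet/picodet_s_416_coco_lcnet.yml \
    -o weights=output/picodet_s_416_coco_lcnet/best_model.pdparams \
       export.benchmark=True \
    --output_dir=output_inference
```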
After exporting the model with the command above, run these commands to convert it into ONNX.
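The conversion commands were also not preserved here. A typical paddle2onnx invocation, plus an optional step to fix the input to a static shape (ESP-DL quantization expects one), is sketched below; the directory, file names, input name 'image', and the 416x416 size are assumptions:

```shell
# Sketch only: convert the exported Paddle inference model to ONNX.
paddle2onnx --model_dir output_inference/picodet_s_416_coco_lcnet \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --opset_version 11 \
            --save_file picodet.onnx

# Optionally fix the input shape to a static size (assumed 1x3x416x416 here).
python -m paddle2onnx.optimize --input_model picodet.onnx \
                               --output_model picodet_static.onnx \
                               --input_shape_dict "{'image': [1, 3, 416, 416]}"
```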
Then the model looks like this: [screenshot of the exported ONNX graph, not preserved in this text]
@100312dog This is great information, thank you! This should solve it for me!
Checklist
Issue or Suggestion Description
I am getting an error when quantizing an ONNX model (a working one: at least I can run inference with it successfully in PaddleDetection).
It is a PaddleDetection model that has been exported to ONNX using paddle2onnx:
PicoDet:
  backbone: LCNet
  neck: LCPAN
  head: PicoHeadV2
..config, basically the pedestrian_detect model (if I understood that lineage correctly), but trained on other data.
The error occurs when I run a variant of quantize_torch_model.py that basically only loads different images; the rest is the same.
I printed the contents of inputs and inputs_dictionary, because it looks like completely wrong data is ending up in the wrong place.
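As a quick sanity check on what the quantizer should be fed, one can print the graph inputs of the exported ONNX file. This is only an illustrative snippet, not part of quantize_torch_model.py, and the file name is a placeholder:

```shell
# Sketch only: print the input names and shapes of the exported ONNX model.
python -c "
import onnx
m = onnx.load('picodet_static.onnx')  # placeholder file name
for i in m.graph.input:
    dims = [d.dim_value if d.dim_value > 0 else d.dim_param
            for d in i.type.tensor_type.shape.dim]
    print(i.name, dims)
"
```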