Custom ONNX model load problem #800
Comments
I'm having the same problem with a model I converted from PyTorch YOLOv5. I get: [TRT] INVALID_ARGUMENT: Cannot find binding of given name: data. I tried a few other options I had from other models, like --input-blob=input_0, but without knowing where to look for these names I wasn't sure. I am using the aarch64 Jetson Xavier NX.
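For anyone else hitting the "Cannot find binding of given name" error: besides opening the model in Netron, the binding names can be read straight out of the ONNX graph. A minimal sketch using the onnx Python package, with a placeholder file name:

```python
# Print the input and output tensor names of an ONNX model; these are the names
# you would pass to detectNet as --input-blob / --output-cvg / --output-bbox.
import onnx

model = onnx.load("yolov5s.onnx")  # placeholder path

# graph.initializer holds the weights; the real network inputs are the remaining graph inputs
weight_names = {init.name for init in model.graph.initializer}
inputs = [i.name for i in model.graph.input if i.name not in weight_names]
outputs = [o.name for o in model.graph.output]

print("inputs: ", inputs)
print("outputs:", outputs)
```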
@murthax I am trying the same PyTorch YOLOv5 model, and my platform is the aarch64 Jetson Xavier AGX. I have found the input and output blob names. However, when you export the model to ONNX with export.py (from the yolov5 source code), you will notice that there is only one output name, 'output', since the dry run works from there. I then assign output_names=['classes', 'boxes']. This way detectNet can load the model, but I have further issues. My current problem is that the output bindings don't seem to be correct, even though the loading stage doesn't complain. It looks like the following:
(I tweaked the input image size as well when I exported the model.) Then, when I try to run detection with this model, I get an illegal memory access error in CUDA.
The first and last prints are my debugging messages.
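The renaming described above happens in the torch.onnx.export call that yolov5's export.py makes. This is only a rough sketch of that idea; the model loading and input size are assumptions, not the actual export.py code:

```python
# Sketch: export a YOLOv5 model to ONNX with custom output names.
# `model` is assumed to be an already-loaded YOLOv5 model in eval() mode
# (e.g. via yolov5's attempt_load); load it however you normally do.
import torch

dummy = torch.zeros(1, 3, 640, 640)        # assumed export image size
torch.onnx.export(
    model, dummy, "yolov5s.onnx",
    opset_version=12,
    input_names=["images"],
    output_names=["classes", "boxes"],      # names later passed to detectNet
)
```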
I am having the same issue on the Jetson TX2 with YOLOv5, and I get exactly the same error. Any update would be helpful. When I change the
@berkantay Could you check your model with Netron? It seems like the detection stage is not included in the ONNX model conversion. This might help. I am encountering two more issues now.
into False.)
Hello @choi0330, when I check the model with Netron, the input layer is called
I'm at a similar stage. I used Netron to interpret the ONNX file and I see some outputs. I was able to move past the input-blob issue by using "images", but then got stuck on the next ones (--input-blob=images --output-cvg=? --output-bbox=?). If I use the outputs shown there, i.e. --input-blob=images --output-cvg=791 --output-bbox=771, the engine starts but I end up with memory errors: [TRT] engine.cpp (986) - Cuda Error in executeInternal: 700 (an illegal memory access was encountered)
To support different object detection models in jetson-inference, you would need to add/modify the pre/post-processing code found here:
This should be made to match the pre/post-processing that gets performed on the original model. Since you are using PyTorch, you might also want to try the torch2trt project: https://github.com/nvidia-ai-iot/torch2trt
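A minimal sketch of the torch2trt route, using a torchvision model as a stand-in since the YOLOv5 network itself may hit unsupported ops; treat this as an illustration of the workflow rather than a YOLOv5 recipe:

```python
# Convert a PyTorch model to TensorRT with torch2trt and sanity-check the outputs.
import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet

model = alexnet(pretrained=True).eval().cuda()   # stand-in; substitute your own model
x = torch.ones((1, 3, 224, 224)).cuda()          # example input shape for alexnet

model_trt = torch2trt(model, [x])                # builds a TensorRT engine under the hood
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))           # should be close to zero
```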
Thanks for the answer. I would love to use the jetson-inference detectNet model, so I'll try to match the pre/post-processing of the original model for now.
If you find a solution it would be nice to share it with us, @choi0330. I will also keep updating this issue if I have any good news.
Hello everyone, when I change the command with I see no memory warnings and the network works, I think. But I get errors from GStreamer about an unsupported image format.
Note: I get this error message on every frame, sequentially.
Here I found some hints on how to export the model with the detection layer: ultralytics/yolov5#708 (comment). However, default TensorRT doesn't support the ScatterND operation that is used in the detection layer. So one possible option would be to implement the post-processing using the three outputs (https://github.com/ultralytics/yolov5/blob/master/models/yolo.py#L52-L59) and NMS, as sketched below. @murthax, I'm sharing this as my current update.
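To make that option concrete, here is a rough sketch of what the post-processing could look like, assuming the ONNX model is exported with the three raw per-scale maps reshaped to (1, 3, H, W, 5+num_classes) (i.e. the Detect layer cut off before the lines linked above) and using the default yolov5s anchors and strides. It is an illustration of the approach, not tested code from this thread:

```python
# Sketch: decode three YOLOv5 output scales and run NMS, mirroring the
# sigmoid / grid-offset / anchor-scaling logic in models/yolo.py.
import torch
import torchvision

STRIDES = [8, 16, 32]                                   # yolov5s defaults
ANCHORS = torch.tensor([[[10, 13], [16, 30], [33, 23]],
                        [[30, 61], [62, 45], [59, 119]],
                        [[116, 90], [156, 198], [373, 326]]], dtype=torch.float32)

def decode(outputs, conf_thres=0.25, iou_thres=0.45):
    """outputs: list of 3 tensors shaped (1, 3, H, W, 5 + num_classes), pre-sigmoid."""
    all_boxes, all_scores = [], []
    for i, x in enumerate(outputs):
        _, na, h, w, _ = x.shape
        y = torch.sigmoid(x)
        gy, gx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack((gx, gy), dim=-1).float()              # (H, W, 2) cell offsets
        xy = (y[..., 0:2] * 2 - 0.5 + grid) * STRIDES[i]          # box centers in pixels
        wh = (y[..., 2:4] * 2) ** 2 * ANCHORS[i].view(1, na, 1, 1, 2)
        score = (y[..., 4:5] * y[..., 5:]).max(dim=-1).values     # objectness * best class prob
        keep = score > conf_thres
        xy, wh, score = xy[keep], wh[keep], score[keep]
        all_boxes.append(torch.cat((xy - wh / 2, xy + wh / 2), dim=-1))  # xyxy
        all_scores.append(score)
    boxes, scores = torch.cat(all_boxes), torch.cat(all_scores)
    keep = torchvision.ops.nms(boxes, scores, iou_thres)
    return boxes[keep], scores[keep]
```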
Hi @choi0330 @berkantay, |
I wish I had better news, but I was not able to move forward here. I've been using YOLOv5 directly on my Jetson Xavier NX. The FPS isn't as good, but at least I'm able to make progress with my project.
Unfortunately I stopped my work on jetson-inference and implemented YOLOv5 itself, without the jetson-inference library, on the Jetson TX2.
OK, thanks for your responses!
I went a bit further and tried to implement the detection layers in the jetson-inference library, but it didn't work out. I guess it will be best to wait for a ScatterND plugin in TensorRT before running YOLOv5 with jetson-inference.
It looks like there are some TensorRT YOLOv5 projects out there: https://github.com/SeanAvery/yolov5-tensorrt
Operator
Hello dusty! I am very much enjoying your tutorials and am grateful for them.
I've looked through the issues but couldn't find a relevant one.
I am currently trying to load a .onnx model (which is not from one of your tutorials) with detectNet in my custom C++ code (jetson-inference is correctly included).
Using the C++ detectNet class, I create a net:
net = detectNet::Create(NULL, "path/to/model.onnx", 0.0f, "path/to/labels.txt");
I intend to use the following Create function from the source code:
The error occurs when the internal code is trying to read input_blobs.
I noticed that the default argument for input_blob doesn't work for an external .onnx model. Should I provide the correct input_blob argument to load the model? And how can I find this information?
Most of the conversion examples (any model to .onnx) don't provide this information, so I need some help with this in jetson-inference.
Looking forward to your ideas! Thanks :)
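For reference, once the binding names are known (from Netron or by reading the ONNX graph), they can be passed explicitly instead of relying on the defaults. A rough sketch using the Python bindings and the same flags mentioned earlier in the thread; the paths and blob names are placeholders, and it assumes the model's outputs actually match detectNet's coverage/bbox post-processing:

```python
# Sketch: load a custom ONNX detection model with explicit binding names.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(argv=[
    "--model=path/to/model.onnx",      # placeholder paths
    "--labels=path/to/labels.txt",
    "--input-blob=images",             # names found with Netron / onnx
    "--output-cvg=scores",
    "--output-bbox=boxes",
], threshold=0.5)

camera = jetson.utils.videoSource("csi://0")   # or a video file / V4L2 device
img = camera.Capture()
detections = net.Detect(img)
print(len(detections), "objects detected")
```

As far as I can tell, the C++ Create() overload referenced above takes the equivalent input/coverage/bbox name parameters, so the same names can be passed there as well.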