Inference with trt plugin in C++. #1409
SirojbekSafarov asked this question in Q&A · Unanswered
Replies: 1 comment
-
What's the pixel format of the input mats? If it is BGR, I think branches …
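A channel-order mix-up of the kind hinted at above is worth ruling out first, since it produces exactly the reported symptom: the engine runs without errors but the outputs are wrong. Below is a minimal, self-contained sketch (no OpenCV; the helper name is hypothetical) of converting interleaved 8-bit BGR pixels, as a `cv::Mat` stores them, into the planar RGB CHW floats that most TensorRT image models expect:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: convert one interleaved 8-bit BGR image (HWC layout,
// as stored in a cv::Mat) into planar RGB floats (CHW layout) scaled to
// [0, 1]. Feeding BGR-HWC bytes into an engine that expects RGB-CHW floats
// is a classic cause of "inference runs fine but the results are wrong".
std::vector<float> bgrHwcToRgbChw(const std::uint8_t* bgr, int height, int width) {
    std::vector<float> chw(3 * height * width);
    const int plane = height * width;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int src = (y * width + x) * 3;  // interleaved B, G, R triple
            const int dst = y * width + x;        // offset within one plane
            chw[0 * plane + dst] = bgr[src + 2] / 255.0f; // R plane
            chw[1 * plane + dst] = bgr[src + 1] / 255.0f; // G plane
            chw[2 * plane + dst] = bgr[src + 0] / 255.0f; // B plane
        }
    }
    return chw;
}
```

Any normalization the model was trained with (mean/std subtraction, different scaling) would also have to match; the sketch only shows the layout and channel-order conversion.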
-
Hello, thank you for the great work. My question might be a little off-topic.

I am trying to run inference on a TRT engine that uses custom plugins, taking the plugin source code from your repository. Inference runs without errors, but the results are wrong. A few observations:

- Another TRT engine that does not use the plugins works correctly from my C++ code, with good performance.
- If I compile the plugins into a .dll and run inference from Python, the results are correct.
- trtexec with the plugin library loaded also works, with good performance.

So the problem only appears when I use the plugins from my own C++ code. Do I need to do any extra steps to use TRT plugins in C++?
```cpp
bool EngineLoader::runInference(const std::vector<cv::Mat>& inputBatchImages,
                                std::vector<std::vector<float>>& featureVectors) {
    auto dims = m_engine->getBindingDimensions(0);
    // ...
}
```
This is my code. If you have any suggestions, please share them; I am stuck and do not know what to try next.
Thank you!
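Since trtexec and the Python path both work, the step that usually matters in C++ is making sure the custom plugin creators are registered with TensorRT's plugin registry *before* the engine is deserialized (trtexec does this when the plugin library is passed via `--plugins`, and Python does it when the .dll is loaded). A minimal sketch, assuming the plugins self-register through `REGISTER_TENSORRT_PLUGIN` static initializers when their shared library is loaded; the function and path names here are illustrative, not from the repository (untested fragment, since it needs a TensorRT installation to build):

```cpp
#include <NvInfer.h>
#include <dlfcn.h>   // POSIX; on Windows use LoadLibrary instead of dlopen
#include <fstream>
#include <iterator>
#include <vector>

// Sketch only: load the plugin shared library first so its
// REGISTER_TENSORRT_PLUGIN static initializers run and register the
// creators, then deserialize the engine, which can now resolve them.
nvinfer1::ICudaEngine* loadEngineWithPlugins(nvinfer1::IRuntime& runtime,
                                             const char* enginePath,
                                             const char* pluginLibPath) {
    // 1. Load the plugin library before touching the engine file.
    if (!dlopen(pluginLibPath, RTLD_LAZY)) {
        return nullptr; // plugin library not found or failed to load
    }
    // 2. Read the serialized engine into memory.
    std::ifstream file(enginePath, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
    // 3. Deserialization looks the custom plugins up in the registry.
    return runtime.deserializeCudaEngine(blob.data(), blob.size());
}
```

If the plugins are instead linked statically into the C++ binary, the equivalent requirement is that their registration code actually runs (e.g. the object files are not dropped by the linker) before `deserializeCudaEngine` is called.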