initial tensorrt ep commit #921
Conversation
Signed-off-by: manickavela1998@gmail.com <manickavela1998@gmail.com>
- cleaner implementation
- fix a memory leak

Signed-off-by: manickavela1998@gmail.com <manickavela.arumugam@uniphore.com>
I will send a separate PR for handling the OnnxRT EP configs separately. I'm in the middle of something, and it might take some time.
I think the build failures are coming from the Python dependency on nvinfer and similar libraries; I will try adding some libraries and check.
I think an extra lib should be linked for TensorRT.
Yes, it seems that directly exposing OrtSessionOptionsAppendExecutionProvider_Tensorrt() as an interface is not working out, so it should be alright now.
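For context, a minimal sketch (not this PR's actual code) of the alternative to the C-style `OrtSessionOptionsAppendExecutionProvider_Tensorrt()` helper mentioned above: appending the TensorRT EP through the onnxruntime C++ API. It assumes onnxruntime was built with TensorRT support; the device id and function name are illustrative choices.

```cpp
// Sketch only: requires an onnxruntime build with TensorRT enabled.
#include <onnxruntime_cxx_api.h>

Ort::SessionOptions MakeTrtSessionOptions() {
  Ort::SessionOptions opts;

  // Zero-initialize the provider options struct and pick GPU 0.
  OrtTensorRTProviderOptions trt_opts{};
  trt_opts.device_id = 0;

  // Register the TensorRT execution provider. Ops that TensorRT cannot
  // handle fall back to whatever providers are appended afterwards
  // (e.g. CUDA, then the default CPU provider).
  opts.AppendExecutionProvider_TensorRT(trt_opts);
  return opts;
}
```

The C++ wrapper takes the options struct explicitly, which avoids exporting the C-style free function through the public interface.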
Please merge the latest master into your branch and the CI should pass, or you can just ignore the failed tests. Could you describe what users need to do to build sherpa-onnx with TensorRT support?
Please leave a comment if you think it is ready to merge.
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
Regarding what is required to run TensorRT: hardware-wise, an Nvidia GPU 😄 (not sure how compatible AMD GPUs are). But seeing that the CI/CD runs are working fine, I think onnxrt is able to handle these dependencies. Just setting the appropriate provider argument should be enough: --provider=trt
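To illustrate the provider selection described above, here is a hypothetical sketch (not sherpa-onnx's actual code) of how a `--provider` value such as `trt` could map to an onnxruntime provider priority list. The provider name strings are the real onnxruntime identifiers; the mapping function itself is an assumption for illustration.

```cpp
#include <string>
#include <vector>

// Hypothetical helper: map a CLI --provider value to an onnxruntime
// provider priority list. ORT tries providers in order and falls back
// to later entries for unsupported ops, so CPU is always the last resort.
std::vector<std::string> ProviderList(const std::string &provider) {
  if (provider == "trt") {
    return {"TensorrtExecutionProvider", "CUDAExecutionProvider",
            "CPUExecutionProvider"};
  }
  if (provider == "cuda") {
    return {"CUDAExecutionProvider", "CPUExecutionProvider"};
  }
  // Anything else (including "cpu") runs on the default CPU provider.
  return {"CPUExecutionProvider"};
}
```

Keeping CPU at the tail of every list means a model still loads on machines without TensorRT or CUDA installed.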
It can be merged. Thank you for the review 😄
Updated this comment; by mistake I had typed "don't think onnxrt" but edited it.
I think the builds are also good enough.
Thank you for your contribution!
ToDo:
Ref: #40, #41
CC: @csukuangfj