
initial tensorrt ep commit #921

Merged
merged 13 commits into from
Jun 6, 2024
Conversation

manickavela29
Contributor

@manickavela29 manickavela29 commented May 26, 2024

  • Adds the TensorRT EP
  • Observed that the encoder model is much faster with TRT than with the CUDA backend
  • Initialization of the onnxruntime session takes more time because TRT engines are generated and converted
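For readers who want to try the same comparison, here is a minimal sketch of requesting the TensorRT EP with CUDA and CPU fallbacks through onnxruntime's Python API (the PR itself wires this up in C++; `encoder.onnx` is a placeholder path, and the guard keeps the sketch from crashing where onnxruntime or the model is absent):

```python
# Request the TensorRT EP first, falling back to CUDA and then CPU.
# Provider names follow the onnxruntime API; the model path is a placeholder.
providers = [
    "TensorrtExecutionProvider",  # tried first; TRT engines are built on the
                                  # first run, which is why session init is slow
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]

try:
    import onnxruntime as ort
    sess = ort.InferenceSession("encoder.onnx", providers=providers)
    print("active providers:", sess.get_providers())
except Exception as e:
    print("could not create a session in this environment:", e)
```

onnxruntime silently falls back down the list, so the same code runs on machines without TensorRT, just more slowly.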

ToDo :

  • Validate the accuracy of model inference
  • Reduce the startup time
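On the startup-time item: a hedged sketch of the TensorRT EP options that cache built engines on disk, so later session creations skip the slow engine build. Option names follow onnxruntime's TensorRT EP documentation; the cache path and FP16 choice are illustrative, not part of this PR:

```python
# TensorRT provider options that persist built engines to disk so that
# subsequent session creations can reuse them instead of rebuilding.
trt_options = {
    "trt_engine_cache_enable": True,
    "trt_engine_cache_path": "./trt_cache",  # illustrative cache directory
    "trt_fp16_enable": True,                 # optional: faster, lower precision
}

# Passed as a (name, options) pair when constructing the session, e.g.
# ort.InferenceSession(model, providers=providers) with:
providers = [("TensorrtExecutionProvider", trt_options), "CPUExecutionProvider"]
```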

Ref : #40, #41
CC : @csukuangfj

Signed-off-by: manickavela1998@gmail.com <manickavela1998@gmail.com>
@manickavela29 manickavela29 marked this pull request as draft June 3, 2024 09:52
-- cleaner implementation
-- fix a memory leak

Signed-off-by: manickavela1998@gmail.com <manickavela.arumugam@uniphore.com>
@manickavela29 manickavela29 changed the title initial tensorrt commit initial tensorrt ep commit Jun 3, 2024
@manickavela29 manickavela29 marked this pull request as ready for review June 3, 2024 10:27
@manickavela29
Contributor Author

I will send a separate PR to handle the onnxruntime EP configs, along with the TRT config options.

I'm in the middle of something else, so it might take some time.

Signed-off-by: manickavela1998@gmail.com <manickavela1998@gmail.com>
@manickavela29
Contributor Author

I think the build failures come from the Python dependency on nvinfer and similar libraries; I will try adding some libraries and check.

@csukuangfj
Collaborator

https://github.com/k2-fsa/sherpa-onnx/actions/runs/9354678305/job/25748084556#step:5:1121

[ 62%] Linking CXX executable ../../bin/sherpa-onnx
/opt/rh/devtoolset-10/root/usr/libexec/gcc/x86_64-redhat-linux/10/ld: ../../lib/libsherpa-onnx-core.so: undefined reference to `OrtSessionOptionsAppendExecutionProvider_Tensorrt'
collect2: error: ld returned 1 exit status
make[2]: *** [bin/sherpa-onnx] Error 1
make[1]: *** [sherpa-onnx/csrc/CMakeFiles/sherpa-onnx.dir/all] Error 2
make: *** [all] Error 2

I think an extra library needs to be linked for TensorRT.

Signed-off-by: manickavela1998@gmail.com <manickavela1998@gmail.com>
@manickavela29
Contributor Author

Yes, it seems that calling OrtSessionOptionsAppendExecutionProvider_Tensorrt() directly as the interface is not working out, and AppendExecutionProvider_TensorRT_V2 is actually a wrapper around that function.

So it should be alright now.

Review comments on sherpa-onnx/csrc/session.cc (outdated, resolved)
@csukuangfj
Collaborator

Please merge the latest master into your branch and the CI should pass, or you can just ignore the failed tests.

Could you describe what users need to do to build sherpa-onnx with TensorRT support?

@csukuangfj
Collaborator

Please leave a comment when you think it is ready to merge.

manickavela29 and others added 3 commits June 5, 2024 08:37
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
@manickavela29
Contributor Author

manickavela29 commented Jun 5, 2024

Regarding what is required to run TensorRT:

Hardware: an Nvidia GPU 😄 (not sure how compatible AMD GPUs are)
Software: TensorRT requires libnvinfer, libnvinfer-dispatch, libnvinfer-plugin, and libnvonnxparsers8 (from my experience)

But seeing that the CI/CD runs are working fine, I think onnxruntime is able to handle these; as long as someone has a compatible GPU it should work fine.

Just setting the appropriate provider argument should be enough. If there are any GitHub pages/docs, this can be updated there:

--provider=trt
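One way to sanity-check that the TensorRT runtime libraries mentioned above are discoverable before enabling the provider (a sketch assuming a Linux dynamic loader; the probe names approximate the packages listed and `find_library` simply returns None where a library is missing, so this runs harmlessly anywhere):

```python
from ctypes.util import find_library

# Probe for the TensorRT shared libraries the comment lists; a value of None
# means the library is not on the dynamic loader's search path.
results = {
    name: find_library(name)
    for name in ("nvinfer", "nvinfer_plugin", "nvonnxparser")
}

for name, path in results.items():
    print(name, "->", path or "not found")
```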

Signed-off-by: manickavela1998@gmail.com <manickavela1998@gmail.com>
@manickavela29
Contributor Author

It can be merged after ensuring that the basic builds are working fine.

Thank you for the review 😄

@manickavela29
Contributor Author

manickavela29 commented Jun 5, 2024


Updated the comment above: by mistake I had typed "don't think onnxrt", but I have edited it.

@manickavela29
Contributor Author

> Please merge the latest master into your branch and the CI should pass or you can just ignore the failed tests.
>
> Could you describe what users need to do to build sherpa-onnx with TensorRT support?

I think the builds are also good enough.
Shall we merge it, @csukuangfj?

@csukuangfj
Collaborator

Thank you for your contribution!

@csukuangfj csukuangfj merged commit 69347ff into k2-fsa:master Jun 6, 2024
180 of 209 checks passed
@manickavela29 manickavela29 deleted the trt branch June 6, 2024 04:00