TRT export #6157
The onnxruntime version is determined by your mmcv version. With the latest mmcv, you should install onnxruntime>=1.8.1. But if you are using an old version of PyTorch and CUDA, I'm afraid you need an older version of mmcv that supports a lower version of onnxruntime.
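A stdlib-only sketch of that minimum-version check (the helper name is mine; for anything serious, `packaging.version` is more robust than tuple comparison):

```python
# Hedged sketch: check that an installed onnxruntime version string
# meets the >=1.8.1 minimum mentioned above for recent mmcv releases.
def meets_minimum(installed: str, minimum: str = "1.8.1") -> bool:
    """Compare dotted version strings numerically, component by component."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

print(meets_minimum("1.8.1"))  # True
print(meets_minimum("1.5.1"))  # False
```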
@RunningLeon Please take a look at this issue.
Got the same issue when converting a point_rend model to TRT. 😢
same issue, any suggestions? |
@fbagci @neverrop Hi, can you post your error messages and outputs from
@RunningLeon thanks for the response, the output for
Also:
However,
@fbagci Hi, you may need to add one env variable when using the pytorch2onnx.py tool: https://github.com/open-mmlab/mmdetection/blob/master/docs/en/tutorials/pytorch2onnx.md#description-of-all-arguments-1
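As I understand the linked doc, the variable in question is `ONNX_BACKEND=MMCVTensorRT`. A sketch of invoking the tool with it set (the script path, config, and checkpoint names are placeholders; run it from the mmdetection repo root):

```python
import os
import subprocess

# Hedged sketch: select the TensorRT-friendly symbolic registrations
# when exporting to ONNX. ONNX_BACKEND value taken from the mmdetection
# doc linked above; file paths below are placeholders.
env = dict(os.environ, ONNX_BACKEND="MMCVTensorRT")
cmd = [
    "python", "tools/deployment/pytorch2onnx.py",
    "configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py",
    "checkpoint.pth",
    "--output-file", "model.onnx",
]
# subprocess.run(cmd, env=env, check=True)  # uncomment inside the repo
print(env["ONNX_BACKEND"])  # MMCVTensorRT
```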
@neverrop Setting the env var does not solve the problem, so I tried to use mmdeploy. Since local installations are tricky and prone to library version inconsistencies, I used the Dockerfile in the PR. This did not work either. deploy.py converted the torch model to ONNX successfully but failed while converting ONNX to TensorRT. Since the PR is not merged, there may be some missing parts. My usage scenario is:
The output is:
I suspect that the CUDA version in the container is incorrect. The CUDA version argument in the Dockerfile is set to 10.2 by default, but in the container
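To confirm what the container actually ships, you can run `nvcc --version` and compare it against `torch.version.cuda` (the CUDA build PyTorch was compiled for); a mismatch there is a common cause of failed plugin builds. A stdlib sketch for parsing the nvcc output (helper name is mine):

```python
import re

# Hedged sketch: pull the release number out of `nvcc --version` output
# so it can be compared against torch.version.cuda programmatically.
def cuda_release(nvcc_version_output: str) -> str:
    """Extract e.g. '10.2' from nvcc's version banner, or 'unknown'."""
    m = re.search(r"release (\d+\.\d+)", nvcc_version_output)
    return m.group(1) if m else "unknown"

sample = "Cuda compilation tools, release 10.2, V10.2.89"
print(cuda_release(sample))  # 10.2
```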
Hi, @fbagci. The
Hello,
I swear I have tried everything to match CUDA+cuDNN+ONNX+TRT+PyTorch versions in order to build both the ONNX and TRT plugins and be able to convert a Mask R-CNN model to TRT, but no luck so far.
Docs are a bit contradictory in this respect, e.g. these pages:
For instance: onnxruntime 1.8.1 installed from pip assumes CUDA 11.0, but for PyTorch 1.6.0 there is no CUDA 11.0 build (the PyTorch team introduced it only from 1.7.0 onward).
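That constraint can be sketched as a tiny predicate (illustrative only, based on the claim above that CUDA 11.0 wheels start at PyTorch 1.7.0):

```python
# Hedged sketch: which PyTorch versions have a CUDA 11.0 (cu110) build,
# per the observation above. Not an authoritative compatibility matrix.
def has_cu110_build(torch_version: str) -> bool:
    major, minor, *_ = (int(p) for p in torch_version.split("."))
    return (major, minor) >= (1, 7)

print(has_cu110_build("1.6.0"))  # False
print(has_cu110_build("1.7.0"))  # True
```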
So far I have generally ended up with the infamous 'UNSUPPORTED_NODE: Assertion failed: axis >= 0 && axis < nbDims' error with pretty much anything I tried, like these folks: #5639 and #5108.
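For context, that assertion is TensorRT's ONNX parser rejecting a negative `axis` attribute on some op. A workaround people describe is normalizing axes to non-negative values before export; the arithmetic, as a sketch (the helper name is mine, not part of any library):

```python
# Hedged sketch: map a possibly-negative ONNX axis attribute to the
# [0, rank) range that older TensorRT ONNX parsers insist on.
def normalize_axis(axis: int, rank: int) -> int:
    """Convert e.g. axis=-1 on a rank-4 tensor to axis=3."""
    return axis + rank if axis < 0 else axis

print(normalize_axis(-1, 4))  # 3
print(normalize_axis(2, 4))   # 2
```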
Please kindly let me know what combination works for sure with the latest 2.16.0 version. I have a guess, which I will try now :) I also promise I will make a Docker image for the community so that other folks don't pull their hair out.
CUDA version: ???
CuDNN version: ???
PyTorch version: ???
ONNX version: ???
ONNX runtime version: ???
TRT version: ???
Thanks a lot, Balint
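A checklist like the one above can be filled in automatically with a small stdlib script (the module names are my guesses for the relevant distributions; anything not installed is reported as such):

```python
# Hedged sketch: print the versions asked for in the checklist above.
# Packages that are missing are reported as "not installed".
from importlib import import_module

def version_of(module_name: str, attr: str = "__version__") -> str:
    try:
        return str(getattr(import_module(module_name), attr))
    except Exception:
        return "not installed"

report = {
    "PyTorch": version_of("torch"),
    "CUDA (torch build)": version_of("torch.version", "cuda"),
    "ONNX": version_of("onnx"),
    "ONNX Runtime": version_of("onnxruntime"),
    "TensorRT": version_of("tensorrt"),
}
for name, ver in report.items():
    print(f"{name}: {ver}")
```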