Replies: 5 comments
-
@mxnet-label-bot add [build, Edge devices, Onnx, Question] Hi @nchafni , thanks for opening the issue. I'm adding some labels for better visibility. It'd be helpful if you added a few more details about the issue you're seeing. I'd also recommend posting this question on https://discuss.mxnet.io/ , where you'll reach a wider audience.
-
Thanks for the response @andrewfayres. I'll add more info here and post on https://discuss.mxnet.io/ . I'm building on the Jetson TX2. If I build with USE_TENSORRT=1 I get the ONNX error. I have built ONNX in 3rdparty and installed and mapped it a few different ways. I'm wondering whether the MXNet TensorRT integration even works on the Jetson, since it doesn't support the TensorRT Python API yet.
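Before rebuilding, it may save a cycle to confirm the generated header is actually on disk. A minimal sketch (the helper name `check_onnx_header` and the search roots are my assumptions, not part of MXNet):

```shell
# Sketch: return 0 and print the path if onnx/onnx.pb.h is findable
# under any of the given root directories. (Helper name and roots are
# assumptions; adjust to your checkout layout.)
check_onnx_header() {
  for root in "$@"; do
    if [ -f "$root/onnx/onnx.pb.h" ]; then
      echo "$root/onnx/onnx.pb.h"
      return 0
    fi
  done
  return 1
}

# Typical locations in an MXNet source tree on the Jetson (assumptions):
check_onnx_header \
  3rdparty/onnx-tensorrt/third_party/onnx/build \
  /usr/local/include || echo "onnx.pb.h not found; generate it from the ONNX build first"
```

If the header is missing, the proto-generation step described later in this thread has to run before the MXNet build.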
-
@KellenSunderland any input on this?
-
@vandanavk #13310 might take some iterating to get right, but this is also addressed in #12469, which is ready to be merged if someone can have a look.

@nchafni: First of all, at the moment ONNX's proto client needs to be generated before compiling. Make sure you're running the following before compilation:

```shell
# Build ONNX
pushd .
echo "Installing ONNX."
cd 3rdparty/onnx-tensorrt/third_party/onnx
rm -rf build
mkdir -p build
cd build
cmake \
  -DCMAKE_CXX_FLAGS=-I/usr/include/python${PYVER} \
  -DBUILD_SHARED_LIBS=ON \
  -G Ninja ..
ninja -j 1 -v onnx/onnx.proto
ninja -j 1 -v
export LIBRARY_PATH=`pwd`:`pwd`/onnx/:$LIBRARY_PATH
export CPLUS_INCLUDE_PATH=`pwd`:$CPLUS_INCLUDE_PATH
popd

# Build ONNX-TensorRT
pushd .
cd 3rdparty/onnx-tensorrt/
mkdir -p build
cd build
cmake ..
make -j$(nproc)
export LIBRARY_PATH=`pwd`:$LIBRARY_PATH
popd
```

If you've already done that and the onnx.pb.h file is present, you can try adding it to your C++ include path so the compiler can pick it up. You could also try building using cmake (just add -DUSE_TENSORRT).
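If the header is already generated, making it visible to the compiler amounts to extending the include path. A minimal sketch, assuming the default 3rdparty build directory (the `ONNX_BUILD` variable name and the exact path are mine; adjust to your tree):

```shell
# Sketch: point the C++ compiler at an already-generated onnx.pb.h.
# The directory below is where the ninja step above drops the proto
# client in a default checkout (assumption).
ONNX_BUILD=3rdparty/onnx-tensorrt/third_party/onnx/build
export CPLUS_INCLUDE_PATH="$ONNX_BUILD:$CPLUS_INCLUDE_PATH"

# Alternatively, configure MXNet with cmake and TensorRT enabled
# (the =1 value is an assumption; the flag name matches the comment above):
# cmake -DUSE_TENSORRT=1 ..
```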
-
Out of curiosity, would you also be able to let us know which version of JetPack you're using? Would you be opposed to upgrading to a recent version? We're seeing some solid perf improvements with CUDA 10, TensorRT 5, and cuDNN 7.3, so we'll likely update the versions used in master soon. My assumption is that most people using TensorRT are looking for performance and would prefer speed over backwards compatibility.
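For reporting back the JetPack component versions, a quick sketch (the file path and package-name patterns are typical for JetPack images but may differ per release, so treat them as assumptions):

```shell
# Sketch: report CUDA / TensorRT / cuDNN versions on a Jetson.
report() { printf '%s: %s\n' "$1" "$2"; }

# CUDA version file shipped with typical JetPack installs (assumption):
if [ -f /usr/local/cuda/version.txt ]; then
  report CUDA "$(cat /usr/local/cuda/version.txt)"
fi

# TensorRT (libnvinfer) and cuDNN package versions, on Debian-based images:
if command -v dpkg >/dev/null 2>&1; then
  dpkg -l | awk '/libnvinfer|libcudnn/ {print $2, $3}'
fi
```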
-
```
src/operator/contrib/./tensorrt-inl.h:41:26: fatal error: onnx/onnx.pb.h: No such file or directory
```

Building on a Jetson TX2.