❓ [Question] How do you compile for Jetson 5.0? #1600
I also tried to compile it manually with the instructions from the first page, but I still get this error:
@arnaghizadeh NVIDIA has started providing PyTorch builds for Jetson based on the NGC container builds, so I would recommend checking out the corresponding branch of Torch-TensorRT and then following the aarch64 build process. @peri044 for visibility/more details.
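For reference, the aarch64 build process mentioned above can be sketched as follows. The branch name comes from this thread; the bazel target and platform flag follow the project's aarch64 instructions of that era and should be verified against your checkout:

```shell
# Sketch of the NGC-branch aarch64 build (target/flag names may differ per branch).
git clone https://github.com/pytorch/TensorRT.git
cd TensorRT
git checkout ngc_22.07
# Point the cuda/torch entries in WORKSPACE at your local installs, then build:
bazel build //:libtorchtrt --platforms //toolchains:jetpack_5.0
```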
@narendasan thanks for the reply, you mean I should follow this one? https://github.com/pytorch/TensorRT/tree/ngc_22.07 I followed its first instruction, but still nothing changed for me. I used
In another attempt I used
I changed the CUDA path to the correct one:
And then
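The "change the CUDA path" step refers to the WORKSPACE file. On the branches discussed here it contains a local-repository entry roughly like the following (a sketch; the exact attributes vary by branch), with the path switched from the default cuda-11.3 to the Jetson's cuda-11.4:

```python
# WORKSPACE (sketch): point the "cuda" repository at the Jetson's CUDA install.
new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "/usr/local/cuda-11.4/",   # was /usr/local/cuda-11.3/
)
```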
The error with the exact same process for
@peri044 can you provide guidance here?
@arnaghizadeh Regarding the error you see: can you try compiling using the following libraries:
@peri044, I still get a compiler error; please check the steps that I have done:
1. Install PyTorch from the NVIDIA page https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html:
2. Change the files accordingly:
3. Change the CUDA path from 11.3 to the correct one:
4. Then compile:
@peri044 it seems release/ngc/22.07 was originally prepared for CUDA 11.3, but Jetson defaults to 11.4, so maybe simply changing CUDA 11.3 to 11.4 in the WORKSPACE won't work. But I'm not sure why it is asking for c10_cuda; is it looking for CUDA 10 and not CUDA 11.3?
@peri044 what happened? Any news?
I am also having the same issues, and I have followed the instructions from here: https://pytorch.org/TensorRT/tutorials/installation.html. I have also pulled the latest Dockerfile for Jetson from NGC: nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3. I do not see torch-tensorrt installed in that container, though; I did a pip freeze and it did not show up. Is there another location in that container? Is that the correct container?
Is this link supposed to be reachable: https://github.com/pytorch/TensorRT/releases/tag/v1.2.0a0.nv22.07? I am not able to get to that release, which appears to have the pre-built binary for the Jetson.
I was finally able to get this to build by checking out commit 07ff244. I would like to move to the newer version, but this will work for the older version on the Jetson Orin.
@cspells interesting, may I know the exact steps you took for compiling?
@cspells with the compiling method that I used above, I tried to compile but I get this error; maybe you have CUDA 11.3 installed on your device and not 11.4?
Change the CUDA path from 11.3 to the correct one:
and then compile
@arnaghizadeh yeah, so first I pulled this commit 07ff244
Then I made the changes to the WORKSPACE file. We have CUDA 11.4 on the Jetson Orin, like you mentioned above. I installed PyTorch as a user named
It looks like yours built but is having linking issues. Did you add to your LD_LIBRARY_PATH in the same terminal that you are building from? I also added the
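The LD_LIBRARY_PATH suggestion above can be sketched like this. The torch path shown is a typical system-wide pip install location on Jetpack and is an assumption; adjust it to wherever your torch/lib actually lives:

```shell
# Hypothetical install location; adjust to your environment.
TORCH_LIB=/usr/local/lib/python3.8/dist-packages/torch/lib
# Prepend it, preserving any existing value of LD_LIBRARY_PATH.
export LD_LIBRARY_PATH=$TORCH_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```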
Unfortunately I am still running into problems with the Python install. I am using this command:
So I am looking at how to fix that, or hoping we can get the Docker image with torch_tensorrt already installed for the Jetson.
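For the Python install step, the project's setup.py of that era accepted Jetson-specific flags; a typical invocation looked roughly like the following. This is a sketch, so verify the flags against py/setup.py in your checkout:

```shell
# Sketch: build/install the Python package on Jetson (run from the repo root).
cd py
python3 setup.py install --use-cxx11-abi --jetpack-version 5.0
```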
With export
The
Since Jetpack 5.1 was released with TRT 8.5 recently, I'm trying to compile the Torch-TRT release/ngc/22.12 branch with Jetpack 5.1. I was able to install and compile. We have a couple of infra-related issues and need to test it before it gets out.
@arnaghizadeh Did you install PyTorch from https://developer.download.nvidia.com/compute/redist/jp/v502/pytorch/, configure the torch path in WORKSPACE to the install location, and then run
@peri044 thanks, that would be a life saver. I used the following command before, here #1600 (comment), which should give me the correct version based on https://developer.download.nvidia.com/compute/redist/jp/v502/pytorch/ :
But I still get the errors that I posted afterwards. One point is that I changed my home directory to an internal SD card to get more space; I'm thinking that maybe that's the problem? Because, unlike me, @cspells could at least compile the C++ code, but mine had some linking issues that I couldn't figure out how to solve.
@arnaghizadeh Did you make sure you're pointing your WORKSPACE torch to the installed torch as follows?
The
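The WORKSPACE entry being asked about typically looks like this (a sketch; the dist-packages path depends on how and where PyTorch was installed):

```python
# WORKSPACE (sketch): point the "torch" repository at the pip-installed torch.
new_local_repository(
    name = "torch",
    path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD",
)
```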
@peri044 Finally it compiled. I commented out these lines:
Commented in these lines:
Changed the path to my correct one. Next I did
Just in case, I also tried it with the following commands:
Up to here the compilation is successful. Next, I tried to add
And got the following (same) error:
@peri044 any news?
@peri044 any update?
I managed to compile torch_tensorrt on a Jetson Orin NX 16 GB with Jetpack 5.1 by following the instructions here:
I got the same errors as in this thread until I copied the appropriate WORKSPACE file from the toolchain folder, as described in the instructions.
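The step described above, copying the Jetpack-specific WORKSPACE over the default one, amounts to something like the following. The filename is hypothetical; the exact name under toolchains/ varies by release, so check your checkout:

```shell
# Hypothetical filename; list the toolchains/ folder of your checkout for the real one.
cp toolchains/jp_workspaces/WORKSPACE.jp50 WORKSPACE
```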
@janblumenkamp could you also get a Python .whl file?
Yes, both compiling and building the wheel work; I am running torch_tensorrt in Python on the Jetson now! :) [Edited:] Just a quick update: I was able to compile and install it, but when running it, I get some errors that indicate that something might be wrong with the interaction with CUDA. I'll keep looking into this and will keep you updated.
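Building a wheel instead of installing directly is the same setup.py flow with bdist_wheel; a sketch under the same flag assumptions as the install command above:

```shell
# Sketch: produce a .whl on the Jetson (flags per the setup.py of that era).
cd py
python3 setup.py bdist_wheel --use-cxx11-abi --jetpack-version 5.0
ls dist/   # the built wheel ends up here
```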
Since I managed to compile but have trouble running, I created a related issue #1891.
@peri044 I am having a similar issue, initially as @arnaghizadeh mentioned and then as @janblumenkamp said. Can you please shed some light on this?
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
I got back to this and managed to get everything working by using the L4T Docker container instead and compiling Torch-TensorRT in there according to the instructions. I use version 1.4.0, but a minor fix is necessary and a patch has to be applied (see #2118). In case this is useful for anyone, this is the patch (save as
And the Dockerfile:
@janblumenkamp Jan, do you think you can do us all a big favor and upload your .whl file here?
Sure, this is compiled in the above-mentioned container with l4t-r35.3.1, so no promises that it works in any other environment. You have to extract the .whl file first from the zip archive. It will try installing torch, so install it with
❓ Question
Hi, as there seems to be no prebuilt Python binary, I just wanted to know if there is any way to install this package on Jetpack 5.0?
What you have already tried
I tried the normal installation for Jetpack 4.6, which fails. I also tried this https://forums.developer.nvidia.com/t/installing-building-torch-tensorrt-for-jetpack-5-0-1-dp-l4t-ml-r34-1-1-py3/220565/6 which gives me this error:
Environment
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_May__4_00:02:26_PDT_2022
Cuda compilation tools, release 11.4, V11.4.239
Build cuda_11.4.r11.4/compiler.31294910_0
```
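Since several fixes in this thread hinge on matching the WORKSPACE to the installed CUDA release, it can help to pull the release number out of `nvcc --version` output when scripting the edit. A sketch using a sample line from the output above (in practice, pipe `nvcc --version` instead):

```shell
# Sample line from `nvcc --version`; replace with the real command's output.
sample='Cuda compilation tools, release 11.4, V11.4.239'
# Extract the "major.minor" release number after the word "release".
release=$(printf '%s\n' "$sample" | sed -n 's/.*release \([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p')
echo "$release"   # 11.4
```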