I trained an ssdlite320_mobilenet_v3_large model on the WiderFace dataset for a face detection task.
Here is what I received when running torch_tensorrt.compile():
(capstone) jetson@jetson-desktop:~/FaceRecognitionSystem/jetson/backend/python$ python test.py
/home/jetson/miniforge-pypy3/envs/capstone/lib/python3.6/site-packages/torch/nn/modules/module.py:1102: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /media/nvidia/NVME/pytorch/pytorch-v1.10.0/aten/src/ATen/native/TensorShape.cpp:2157.)
return forward_call(*input, **kwargs)
Wrapper works successfully!
/home/jetson/miniforge-pypy3/envs/capstone/lib/python3.6/site-packages/torch/jit/_trace.py:965: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
argument_names,
Tracing successful!
WARNING: [Torch-TensorRT] - For input x, found user specified input dtype as Float16, however when inspecting the graph, the input type expected was inferred to be Float
The compiler is going to use the user setting Float16
This conflict may cause an error at runtime due to partial compilation being enabled and therefore compatibility with PyTorch's data type convention is required.
If you do indeed see errors at runtime either:
- Remove the dtype spec for x
- Disable partial compilation by setting require_full_compilation to True
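For anyone else hitting the same warnings, here is a minimal sketch of the mitigations the log itself suggests. The `ListOutput` module, input shape, and settings dict below are hypothetical stand-ins for illustration, not the actual detection wrapper or a full `torch_tensorrt.compile()` call:

```python
import torch

# Hypothetical toy module that returns a list, reproducing the situation
# the TracerWarning describes (list outputs at the trace boundary).
class ListOutput(torch.nn.Module):
    def forward(self, x):
        return [x * 2, x + 1]

m = ListOutput().eval()
example = torch.randn(1, 3, 320, 320)

# As the warning says, strict=False lets torch.jit.trace accept the list
# output (valid only if the container structure never changes with the input).
traced = torch.jit.trace(m, example, strict=False)

# For the Float16-vs-Float conflict, the warning offers two options: drop the
# explicit Float16 input dtype and let it be inferred, or force full
# compilation so PyTorch's dtype convention need not be preserved.
compile_settings = {
    # "inputs": [torch_tensorrt.Input(shape=[1, 3, 320, 320])],  # dtype omitted
    "enabled_precisions": {torch.half},
    "require_full_compilation": True,
}
```

Dropping the explicit Float16 input spec avoids the conflict entirely, while `require_full_compilation=True` instead removes the PyTorch-compatibility requirement that partial compilation imposes.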
Yeah, the issue is that these versions were released almost 4 years ago, and these problems have more than likely been solved in later versions of Torch-TensorRT (if not in TorchScript, then in the Dynamo frontend). So it's unlikely we (or PyTorch, if it's an issue on their side) are going to backport a fix, especially since software support for Maxwell/Jetson Nano ended a few years back. You could try to work around the limitation using an NGC container with newer versions of the software, but I suspect you can only go so far forward, as you will eventually hit PyTorch SM support issues.
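A rough sketch of that container workaround; the exact tag below is an assumption and should be checked against the NGC catalog for one that matches your JetPack release:

```shell
# Pull an l4t-pytorch container from NGC (tag is illustrative -- verify on NGC)
docker pull nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3

# Run it with GPU access on the Jetson
docker run -it --rm --runtime nvidia --network host \
    nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3
```

As noted above, newer containers eventually require a newer SM version than Maxwell provides, so this only buys a limited amount of forward compatibility.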
Bug Description
I trained an ssdlite320_mobilenet_v3_large model on the WiderFace dataset for a face detection task. The warnings above appeared when running torch_tensorrt.compile().

To Reproduce
Steps to reproduce the behavior:

Expected behavior
I expected ssdlite320_mobilenet_v3_large to be supported for conversion to TensorRT.

Environment
How you installed PyTorch (conda, pip, libtorch, source): source