Description
A clear and concise description of what the bug is.
I am trying to run a simple inference using the PyTorch backend on a Jetson Nano. My Triton installation works for all the other backends, except for PyTorch, which keeps throwing the same error.
Triton Information
What version of Triton are you using?
2.19.0 for Jetson
Are you using the Triton container or did you build it yourself?
I followed the installation steps in your docs.
To Reproduce
Steps to reproduce the behavior.
I first converted a simple model to TorchScript as follows:
import torch

# load a pretrained VGG-11 from torchvision on CPU and trace it
model = torch.hub.load('pytorch/vision:v0.10.0', 'vgg11', pretrained=True).cpu()
model.eval()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module(example)  # sanity check: run the traced module once
traced_script_module.save("vgg.pt")
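For reference, this is roughly how I laid out the traced model in the model repository, together with a minimal config.pbtxt sketched along the lines of the PyTorch backend docs. The INPUT__0/OUTPUT__0 names, the FP32 datatype, and the 1000-class output dims are my assumptions for this VGG-11 setup:

```
model_repository/
└── vgg/
    ├── config.pbtxt
    └── 1/
        └── model.pt   # the saved vgg.pt, renamed to Triton's default file name
```

```
name: "vgg"
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 1, 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 1, 1000 ]
  }
]
```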
Then I followed the client examples to send inference requests with gRPC:
input0.set_data_from_numpy(test_img)
output = tritongrpcclient.InferRequestedOutput(output_name)
response = triton_client.infer(model_name=model_name,
                               inputs=[input0], outputs=[output])
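For context, here is a fuller sketch of how input0 and triton_client are set up, following the Triton Python gRPC client examples. The server URL, model/input/output names, and FP32 datatype below are assumptions and have to match the config.pbtxt above:

```python
import numpy as np
import tritonclient.grpc as tritongrpcclient

# connect to the local Triton gRPC endpoint (default port 8001)
triton_client = tritongrpcclient.InferenceServerClient(url="localhost:8001")

# dummy preprocessed image batch matching the traced model's input shape
test_img = np.random.rand(1, 3, 224, 224).astype(np.float32)

# input/output names must match the ones declared in config.pbtxt
input0 = tritongrpcclient.InferInput("INPUT__0", list(test_img.shape), "FP32")
input0.set_data_from_numpy(test_img)
output = tritongrpcclient.InferRequestedOutput("OUTPUT__0")

response = triton_client.infer(model_name="vgg",
                               inputs=[input0], outputs=[output])
print(response.as_numpy("OUTPUT__0").shape)
```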
But I kept getting this error:
InferenceServerException: [StatusCode.INTERNAL] PyTorch execute failure: forward() Expected a value of type 'Tensor' for argument 'x' but instead found type 'NoneType'.
Position: 1
Declaration: forward(__torch__.torchvision.models.vgg.VGG self, Tensor x) -> (Tensor)
Exception raised from checkArg at /opt/package_build/pytorch/aten/src/ATen/core/function_schema_inl.h:186 (most recent call first):
frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 0xd8 (0x7f6f903888 in /opt/tritonserver/backends/pytorch/libc10.so)
frame #1: + 0x17b57d0 (0x7f711767d0 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #2: + 0x17b5170 (0x7f71176170 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #3: torch::jit::Method::operator()(std::vector<c10::IValue, std::allocatorc10::IValue >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, c10::IValue> > > const&) const + 0x378 (0x7f73500708 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #4: + 0x1efd8 (0x7f874aefd8 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #5: + 0x158fc (0x7f874a58fc in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #6: + 0x17e78 (0x7f874a7e78 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #7: TRITONBACKEND_ModelInstanceExecute + 0x324 (0x7f874a8e64 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #8: + 0x215c1c (0x7f87e8ec1c in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #9: + 0x21628c (0x7f87e8f28c in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #10: + 0xd8ae8 (0x7f87d51ae8 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #11: + 0x211cf0 (0x7f87e8acf0 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #12: + 0xbbe94 (0x7f8787de94 in /usr/lib/aarch64-linux-gnu/libstdc++.so.6)
frame #13: + 0x7088 (0x7f87c28088 in /lib/aarch64-linux-gnu/libpthread.so.0)