Starting TorchScript export with torch 1.7.0+cpu...
/home/yasin/envs/yolo-torch1.6/lib/python3.8/site-packages/torch/jit/_trace.py:934: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
module._c._create_method_from_trace(
TorchScript export success, saved as yolov5s.torchscript.pt
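The TracerWarning above points at the root cause: the traced model returns a Python list, which TorchScript records as a `List[...]` rather than a `Tuple[...]`. One way to avoid the mismatch on the export side is to wrap the model so the traced output is a tuple. This is a hedged sketch, not the repo's actual export code — `ListModel` is a hypothetical stand-in for the real network:

```python
import torch
import torch.nn as nn

class ListModel(nn.Module):
    # Stand-in for a network whose forward returns a list of tensors,
    # as the traced YOLOv5 model apparently does.
    def forward(self, x):
        return [x * 2, x + 1]

class TupleWrapper(nn.Module):
    # Convert the list output to a tuple so the traced graph's return
    # type is Tuple[...] rather than List[...].
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return tuple(self.model(x))

example = torch.zeros(1, 3)
traced = torch.jit.trace(TupleWrapper(ListModel()), example)
out = traced(example)
print(type(out).__name__)  # tuple
```

With a tuple return type, `IValue::toTuple()` on the C++ side succeeds without touching the inference code.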
Expected behavior
But when I use this .torchscript.pt in a C++ inference program (this repo), I get the following errors:
terminate called after throwing an instance of 'c10::Error'
what(): isTuple() INTERNAL ASSERT FAILED at "/home/yasin/Projects/CVision/yoloInference/YOLOv5-LibTorch/libtorch/include/ATen/core/ivalue_inl.h":927, please report a bug to PyTorch. Expected Tuple but got GenericList
Exception raised from toTuple at /home/yasin/Projects/CVision/yoloInference/YOLOv5-LibTorch/libtorch/include/ATen/core/ivalue_inl.h:927 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f63d2fb2b89 in /home/yasin/Projects/CVision/yoloInference/YOLOv5-LibTorch/libtorch/lib/libc10.so)
frame #1: c10::IValue::toTuple() && + 0x142 (0x5652b80d28d2 in /home/yasin/Projects/CVision/yoloInference/YOLOv5-LibTorch/bin/YOLOv5LibTorch)
frame #2: main + 0x7fc (0x5652b80cd00c in /home/yasin/Projects/CVision/yoloInference/YOLOv5-LibTorch/bin/YOLOv5LibTorch)
frame #3: __libc_start_main + 0xe7 (0x7f63be207b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #4: _start + 0x2a (0x5652b80cdfba in /home/yasin/Projects/CVision/yoloInference/YOLOv5-LibTorch/bin/YOLOv5LibTorch)
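The assert confirms what the TracerWarning hinted: the exported graph returns a `List`, so `toTuple()` in the C++ program aborts with "Expected Tuple but got GenericList". The mismatch can be verified from Python without LibTorch. A minimal repro sketch (not the actual YOLOv5 code; `ListModel` is a hypothetical stand-in):

```python
import io
import torch
import torch.nn as nn

class ListModel(nn.Module):
    # Stand-in for a model whose forward returns a list of tensors.
    def forward(self, x):
        return [x * 2, x + 1]

# Trace and round-trip through serialization, as export.py does.
traced = torch.jit.trace(ListModel(), torch.zeros(1, 3), strict=False)
buf = io.BytesIO()
torch.jit.save(traced, buf)
buf.seek(0)
loaded = torch.jit.load(buf)

out = loaded(torch.zeros(1, 3))
print(type(out).__name__)  # list -> LibTorch sees this as GenericList
```

If the loaded module returns a list here, the C++ consumer must either call `toList()` instead of `toTuple()`, or the model must be re-exported with a tuple output.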
Environment
OS: Ubuntu 18.6
PyTorch: 1.5.1, 1.6.0, 1.7.0
Hardware: GPU (CUDA) and CPU (I only need CPU inference, but export.py was run both with and without CUDA)
#723 🐛 Bug

To summarize: model export using export.py runs to completion and prints the success message shown above, but the exported .torchscript.pt then fails with the error above when used in this repo's C++ inference program.
Additional context
I ran the same default export command in Google Colab and got the same error:
https://colab.research.google.com/drive/1oYSImVjmulMhrSrU--f1He-2ScIkGyIp#scrollTo=qwH26hf46sco&line=3&uniqifier=1