
[enforce fail at operator.cc:76] blob != nullptr. op Cast: Encountered a non-existing input blob: data #1993

Closed
ruilongzhang opened this issue Sep 9, 2020 · 6 comments

Comments

@ruilongzhang

I0909 03:35:28.846722 539 server.cc:120] Initializing Triton Inference Server
I0909 03:35:30.314159 539 model_repository_manager.cc:805] loading: zhuangshiwu:1
I0909 03:35:30.677072 539 netdef_backend.cc:201] Creating instance zhuangshiwu_0_0_gpu0 on GPU 0 (6.1) using init_model.netdef and model.netdef
[E init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[W workspace.cc:170] Blob data not in the workspace.
E0909 03:35:47.585097 539 model_repository_manager.cc:1007] failed to load 'zhuangshiwu' version 1: Internal: load failed for 'zhuangshiwu': [enforce fail at operator.cc:76] blob != nullptr. op Cast: Encountered a non-existing input blob: data
frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, void const*) + 0x67 (0x7f90a0357ab7 in /opt/tritonserver/lib/pytorch/libc10.so)
frame #1: caffe2::OperatorBase::OperatorBase(caffe2::OperatorDef const&, caffe2::Workspace*) + 0x6d9 (0x7f911e930ef9 in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #2: + 0x32ab36a (0x7f90a3e0236a in /opt/tritonserver/lib/pytorch/libtorch_cuda.so)
frame #3: std::_Function_handler<std::unique_ptr<caffe2::OperatorBase, std::default_deletecaffe2::OperatorBase > (caffe2::OperatorDef const&, caffe2::Workspace*), std::unique_ptr<caffe2::OperatorBase, std::default_deletecaffe2::OperatorBase > ()(caffe2::OperatorDef const&, caffe2::Workspace)>::_M_invoke(std::_Any_data const&, caffe2::OperatorDef const&, caffe2::Workspace*&&) + 0x23 (0x7f911ea34db3 in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #4: + 0x1cac1bc (0x7f911e92e1bc in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #5: caffe2::CreateOperator(caffe2::OperatorDef const&, caffe2::Workspace*, int) + 0x499 (0x7f911e92f699 in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #6: caffe2::SimpleNet::SimpleNet(std::shared_ptr<caffe2::NetDef const> const&, caffe2::Workspace*) + 0x193 (0x7f911e920833 in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #7: + 0x1ca134e (0x7f911e92334e in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #8: std::_Function_handler<std::unique_ptr<caffe2::NetBase, std::default_deletecaffe2::NetBase > (std::shared_ptr<caffe2::NetDef const> const&, caffe2::Workspace*), std::unique_ptr<caffe2::NetBase, std::default_deletecaffe2::NetBase > ()(std::shared_ptr<caffe2::NetDef const> const&, caffe2::Workspace)>::_M_invoke(std::_Any_data const&, std::shared_ptr<caffe2::NetDef const> const&, caffe2::Workspace*&&) + 0x23 (0x7f911e8fac23 in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #9: caffe2::CreateNet(std::shared_ptr<caffe2::NetDef const> const&, caffe2::Workspace*) + 0x4a5 (0x7f911e8ecd25 in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #10: caffe2::Workspace::CreateNet(std::shared_ptr<caffe2::NetDef const> const&, bool) + 0x11d (0x7f911e9770bd in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #11: caffe2::Workspace::CreateNet(caffe2::NetDef const&, bool) + 0xa0 (0x7f911e978610 in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #12: + 0x1ca6db1 (0x7f911e928db1 in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #13: Caffe2WorkspaceCreate + 0xc4d (0x7f911e92a27d in /opt/tritonserver/lib/pytorch/libtorch_cpu.so)
frame #14: + 0x2516da (0x7f9181a5d6da in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #15: + 0x253a52 (0x7f9181a5fa52 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #16: + 0x249f78 (0x7f9181a55f78 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #17: + 0x120d4f (0x7f918192cd4f in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #18: + 0x12cde5 (0x7f9181938de5 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #19: + 0xbd6df (0x7f9180f006df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #20: + 0x76db (0x7f91815f46db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #21: clone + 0x3f (0x7f91805bd88f in /lib/x86_64-linux-gnu/libc.so.6)

error: creating server: Internal - failed to load all models
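The enforce failure comes from Caffe2's operator construction: every input blob of an op must either already exist in the workspace or be produced by an earlier op in the net. Here the first Cast op reads a blob named `data` that nothing has created, which is also what the earlier warning (`Blob data not in the workspace.`) reports. A minimal stand-in for that check (plain dicts instead of the real `NetDef` proto; the op names and shapes are hypothetical):

```python
# Toy illustration of the blob-existence check Caffe2 performs when a net
# is instantiated (operator.cc enforces blob != nullptr for each op input).
def find_missing_inputs(ops, workspace_blobs):
    """Return input blobs that are neither in the workspace nor
    produced by an earlier op in the net."""
    available = set(workspace_blobs)
    missing = set()
    for op in ops:  # ops are checked in definition order
        for blob in op["inputs"]:
            if blob not in available:
                missing.add(blob)
        available.update(op["outputs"])
    return missing

# The failing net starts with a Cast op that reads "data", but the
# server never fed a blob of that name into the workspace.
net = [
    {"type": "Cast", "inputs": ["data"], "outputs": ["data_fp32"]},
    {"type": "Conv", "inputs": ["data_fp32", "conv1_w"], "outputs": ["conv1"]},
]
print(find_missing_inputs(net, workspace_blobs={"conv1_w"}))  # → {'data'}
```

Once `data` is fed into the workspace (or produced by the init net), the check passes, which is why the same model can load fine through an API that feeds the input blob explicitly.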

@deadeyegoodwin (Contributor)

Is it a TorchScript model you are trying to load? Are you able to load the model using the PyTorch Python API?

@CoderHam (Contributor)

It appears to be a Caffe2 model. However, it too should be loadable with the Python API.

@ruilongzhang (Author)

@CoderHam Thank you for your answer. How do I use a Caffe2 model with Triton Inference Server?
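For reference: in Triton releases that still shipped the Caffe2 netdef backend, the repository entry looked roughly like the layout already visible in the log (`init_model.netdef` plus `model.netdef` inside a version directory), with a `config.pbtxt` declaring `platform: "caffe2_netdef"` and the input/output tensors. The data types, shapes, and output name below are placeholders; the input name must match the blob the net expects (here `data`):

```
name: "zhuangshiwu"
platform: "caffe2_netdef"
max_batch_size: 8
input [
  {
    name: "data"              # must match the net's input blob
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]     # placeholder shape
  }
]
output [
  {
    name: "prob"              # placeholder output blob name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Note that, as stated later in this thread, newer Triton releases dropped the Caffe2 backend entirely, so this only applies to older versions.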

@ruilongzhang (Author)

@deadeyegoodwin I can load the Caffe2 model with the PyTorch API, but loading it with Triton Inference Server fails.

@CoderHam (Contributor)

CoderHam commented Oct 9, 2020

I would recommend converting it to an ONNX or TensorRT model and checking whether it works correctly. Caffe2 is losing popularity and will likely be deprecated soon.

@deadeyegoodwin (Contributor)

Caffe2 is no longer supported by Triton. Please try converting your model to a supported format.
