When using an exported Mask R-CNN Caffe2 model to infer an image, I get the error `[enforce fail at batch_permutation_op.cu:66] X.dim32(0) > 0. 0 vs 0` #1895
Before pytorch/pytorch#39851, the model throws an exception if no object is detected.
@ppwwyyxx Thanks for your answer!
For the same line of code:
Could you please tell me how to modify the code in caffe2_mask_rcnn.cpp? I have seen you say that this error can be caught in #1724, but when I use
it, it does not seem to work. Is there another way to catch it? By the way, I am a bit confused about the definition of 'no object is detected': when I use the Python code to infer the example image and visualize the result, reducing the confidence score threshold to 0.1 or lower does yield one or more objects, which confuses me.
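A minimal sketch of one way such a failure could be caught, assuming the program runs the exported net through `caffe2::Workspace::RunNet` (as the stack trace later in this issue suggests); the function and variable names here are illustrative and are not taken from caffe2_mask_rcnn.cpp:

```cpp
// Illustrative sketch only: RunExportedModel and its arguments are hypothetical.
// It shows that the enforce failure raised by BatchPermutation surfaces as a
// c10::Error, which can be caught around the RunNet call and treated as
// "no objects detected" instead of letting the program abort.
#include <caffe2/core/workspace.h>
#include <c10/util/Exception.h>
#include <iostream>
#include <string>

bool RunExportedModel(caffe2::Workspace& workspace, const std::string& net_name) {
  try {
    return workspace.RunNet(net_name);
  } catch (const c10::Error& e) {
    // Reached when the exported model produces zero detections and
    // BatchPermutation receives an empty tensor (X.dim32(0) == 0).
    std::cerr << "Inference failed, treating as no detections: " << e.what() << std::endl;
    return false;
  }
}
```

Catching `c10::Error` at this level lets the caller treat an empty detection set as an ordinary "no objects" result rather than a crash.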
Unfortunately, in PyTorch 1.6 protobuf has to be compiled and linked into the program. To link to it:
--- i/tools/deploy/CMakeLists.txt
+++ w/tools/deploy/CMakeLists.txt
@@ -10,7 +10,7 @@ find_package(OpenCV REQUIRED)
add_executable(caffe2_mask_rcnn caffe2_mask_rcnn.cpp)
target_link_libraries(
caffe2_mask_rcnn
- "${TORCH_LIBRARIES}" gflags glog ${OpenCV_LIBS})
+ "${TORCH_LIBRARIES}" gflags glog protobuf ${OpenCV_LIBS})
 set_property(TARGET caffe2_mask_rcnn PROPERTY CXX_STANDARD 14)
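For context on why the extra `protobuf` entry is needed: the exported Caffe2 model is shipped as `.pb` files that are parsed through protobuf-generated classes such as `caffe2::NetDef`, so the program needs libprotobuf symbols at link time. The helper below is a hypothetical sketch, not code from the deploy example:

```cpp
// Hypothetical sketch: loading an exported NetDef calls protobuf-generated
// methods (e.g. ParseFromString), whose symbols must be resolved from
// libprotobuf when linking against PyTorch 1.6.
#include <caffe2/proto/caffe2.pb.h>
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>

caffe2::NetDef LoadNetDef(const std::string& path) {
  std::ifstream in(path, std::ios::binary);
  std::string buffer{std::istreambuf_iterator<char>(in),
                     std::istreambuf_iterator<char>()};
  caffe2::NetDef net;
  if (!net.ParseFromString(buffer)) {  // generated by protoc
    throw std::runtime_error("Failed to parse " + path);
  }
  return net;
}
```

After editing CMakeLists.txt, reconfigure with CMake and rebuild so the added link dependency takes effect.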
@ppwwyyxx Thanks for your answer, but after I compiled and linked protobuf 3.11.4 into the program, there is still an error when building it:
For this line of code:
And the output of `ld -lprotobuf --verbose` is:
The above command works in the official docker. In your environment it may need
@ppwwyyxx Thanks, it works!
Instructions To Reproduce the 🐛 Bug:
What changes you made (`git diff`) or what code you wrote, and the resulting error:
```
terminate called after throwing an instance of 'c10::Error'
  what():  [enforce fail at batch_permutation_op.cu:66] X.dim32(0) > 0. 0 vs 0
Error from operator:
input: "614" input: "609" output: "input.68" name: "" type: "BatchPermutation" device_option { device_type: 1 device_id: 0 }
frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0x67 (0x7f1c2eae5787 in /usr/local/libtorch/lib/libc10.so)
frame #1: caffe2::BatchPermutationOp<float, caffe2::CUDAContext>::RunOnDevice() + 0x440 (0x7f1be3fb3670 in /usr/local/libtorch/lib/libtorch_cuda.so)
frame #2: + 0x35d36e2 (0x7f1be3f6d6e2 in /usr/local/libtorch/lib/libtorch_cuda.so)
frame #3: caffe2::SimpleNet::Run() + 0x196 (0x7f1c1f339336 in /usr/local/libtorch/lib/libtorch_cpu.so)
frame #4: caffe2::Workspace::RunNet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x8a2 (0x7f1c1f385162 in /usr/local/libtorch/lib/libtorch_cpu.so)
frame #5: newrun(cv::Mat&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x570 (0x4716d3 in ./mhp_parsing)
frame #6: main + 0x269 (0x474ddc in ./mhp_parsing)
frame #7: __libc_start_main + 0xf0 (0x7f1bdf665830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #8: _start + 0x29 (0x46fc09 in ./mhp_parsing)
Aborted (core dumped)
```
Please simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.

I use the exported model to infer images, and the error occurs for an image like this one:

![1](https://user-images.githubusercontent.com/24741969/89980596-c836c000-dca4-11ea-93a3-f6f3f35c8a2f.jpg)

Expected behavior:
I have searched the issues and found a similar one, #1580, but I did not figure out how to fix this problem from it.
Also, when I use some of my fine-tuned models to infer this image, one works while the others fail.
Environment:
Provide your environment information using the following command:
| Item | Value |
|---|---|
| sys.platform | linux |
| Python | 3.7.7 (default, May 7 2020, 21:25:33) [GCC 7.3.0] |
| numpy | 1.18.5 |
| detectron2 | 0.2 @/home/qiu/Projects/detectron2/detectron2 |
| Compiler | GCC 7.5 |
| CUDA compiler | CUDA 10.1 |
| detectron2 arch flags | sm_61 |
| DETECTRON2_ENV_MODULE | |
| PyTorch | 1.5.1 @/home/qiu/anaconda3/envs/de2/lib/python3.7/site-packages/torch |
| PyTorch debug build | False |
| GPU available | True |
| GPU 0 | GeForce GTX 1080 Ti |
| CUDA_HOME | /usr/local/cuda-10.1 |
| Pillow | 7.2.0 |
| torchvision | 0.6.0a0+35d732a @/home/qiu/anaconda3/envs/de2/lib/python3.7/site-packages/torchvision |
| torchvision arch flags | sm_35, sm_50, sm_60, sm_70, sm_75 |
| fvcore | 0.1.1.post20200716 |
| cv2 | 4.3.0 |
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.1 Product Build 20200208 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.3
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
If your issue looks like an installation issue / environment issue,
please first try to solve it yourself with the instructions in
https://detectron2.readthedocs.io/tutorials/install.html#common-installation-issues